This spring’s hot topic of conversation for my colleagues in higher ed was that “Everyone Is Cheating Their Way Through College” article in New York magazine. Most of the fellow professors I spoke with about this were horrified by how often students now can and do let A.I. write their papers. Others are joining their students in asking, Why not?
A surprising coalition—William Shakespeare and 17th-century scribes, as well as 21st-century elementary school teachers, anti-fascist scholars, and epidemiologists—would tell you why not.
A key principle for 17th-century scholars transcribing or translating classical or biblical texts was lectio difficilior potior: The reading that is stranger is stronger. If a word differs between two versions of the text you’re working on, you should actually choose the one that seems to make less sense. That surprising word choice is likelier to have been the original author’s meaning, because it’s likelier that a previous copyist, translator, or (eventually) typesetter replaced a surprising word with one that was more predictable than vice versa. The wisdom was: Don’t let an easy, commonsensical option erase a unique and potentially more interesting and challenging statement.
Consider what has happened to Shakespeare’s Romeo and Juliet. After Romeo Montague kills Juliet Capulet’s cousin Tybalt in their families’ feud, one character argues that Romeo should be forgiven, since Tybalt had just killed a friend of Romeo’s. Although all the early printings give that conciliating speech to Juliet’s father, Lord Capulet, editors and directors have, for centuries, almost unanimously given it instead to Romeo’s father, Lord Montague. After all, who would expect the Capulet patriarch to defend a Montague who had just killed a Capulet?
So that switch makes every kind of sense … except Shakespearean sense. By giving the speech to a more predictable source, all those editors (and every production of the play I have ever seen) squander the more interesting possibility that Juliet’s father would actually have approved her marriage to Romeo, if the Capulet father and daughter had only communicated. Earlier in the play, Shakespeare shows Lord Capulet fiercely rebuffing Tybalt for threatening to kill Romeo and his friends at the party where the lovers meet. Capulet even invites the Montague men to stay for a meal, and he resists marrying Juliet to another worthy suitor; only after Romeo is exiled does Capulet stop stalling that match.
These and other clues offer yet another tantalizing near miss of a happy ending in a play that is full of them. By assuming that the feud meant that no Capulet father would ever defend a Montague lad (as Juliet and Romeo disastrously assume that Lord Capulet would never accept a Montague son-in-law), editors and directors have erased Shakespeare’s fascinating complexity. A joyous marriage was possible, but the lovers in the play, and lovers of the play ever since, chose to believe instead in what seemed likely. Multiverses of better possible outcomes disappear when we look only for the most probable—a mistake that could squander the ability of our creative species to escape the political, economic, and environmental tragedies that are looming.
Text-generating A.I. programs are the sworn enemy of that lectio difficilior potior principle—and thereby of human complexity itself. What those programs essentially do is choose, with merely probabilistic variations, the words that most often follow the preceding words in the giant tangle of documents used to train the program. Let the bullshit fall where it may. And there it stays, getting stinkier, because increasingly the documents those programs are being trained on were written by other such programs.
So A.I.-generated writing says something because that is what has already been said, especially if it has often been said (the word predict literally means “before-speak”). Hence the irony of students using it for “creative writing” assignments, and of literary journals and publishers having to admonish writers not to submit A.I.-generated fiction or poems under their own names.
Even a great large language model couldn’t have written Shakespeare’s plays, because those plays are remarkable for the number of new words he deployed that have survived into modern English. You can raise an LLM’s “temperature” parameter, but that just adds randomness (including dangerous hallucinations), not purposeful complexity. The result is falsehoods dangerously disguised as facts, like the report Romeo receives of Juliet’s supposed death, an error that ends up killing them both.
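For readers curious about the mechanics, the “temperature” point can be made concrete with a toy sketch. This is not any vendor’s implementation; it is a minimal, self-contained illustration of how temperature rescales a hypothetical next-token distribution. The scores below are invented for the example.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw next-token scores into probabilities.

    Lower temperature sharpens the distribution toward the single
    likeliest token; higher temperature flattens it, spreading
    probability mass across alternatives.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for four candidate next words
logits = [4.0, 2.0, 1.0, 0.5]

cool = softmax_with_temperature(logits, 0.5)  # near-deterministic
hot = softmax_with_temperature(logits, 2.0)   # flatter, more random

# At low temperature the most probable word dominates; at high
# temperature the choice becomes noisier, but the candidate set and
# their relative ordering are unchanged: no new possibility appears.
print(cool)
print(hot)
```

Raising the temperature only redistributes probability among the words the model already ranked; it never introduces the kind of deliberate, surprising word choice the lectio difficilior principle prizes.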
Writing for Readers
In recent decades, during the so-called reading wars in early-education theory, whole-language and three-cueing systems began replacing a traditional reliance on phonics. Instead of learning to sound out words by reading the letters, students were encouraged to do what these LLMs do: predict, from previous experience, what the next word is likely to be, then determine whether what they see on the page matches the picture of the word available in their memory. That pedagogical experiment is now widely viewed as a failure that has damaged generations of readers. It diminished the connection between written and spoken words and certainly made reading itself less of a voyage of discovery.
Name-brand scholars have tried to discredit the connection readers feel with creative writers: the belief that reading can put us in contact with another unique human being. In the mid-20th century, they argued that trying to understand what a poet or novelist intended was a blunder they dismissively called the “intentional fallacy.” Other elite theorists argued that an array of sociocultural forces was the true creator of any text and that any personal touches should be considered mere surface decoration.
Yet, readers—understandably, I believe—continue to care about literature as a collection of messages from authors that bridge the gaps that separate us: the reader from the author, as well as from the people the author depicts. As a verbal and communal species, human beings are always asking themselves: Why did that person say exactly that? What can I deduce about their meaning and the feelings behind it? A leading theory attributes the rapid evolution of the Homo sapiens brain to those social tasks.
We care to read mostly because we want to know what another person thinks and cares about, not what words an algorithm happens to produce as it simulates knowledge. This isn’t just about teachers disliking plagiarism. This spring, thousands of students at the University of North Georgia rebelled against a plan to have their names read at graduation by an A.I. system, even though that setup would ensure that their names were pronounced correctly and could be aligned with jumbotron projections of their achievements. Students at Northeastern University and elsewhere are angry that their teachers are letting A.I. respond to their essays. Whether we’re reading or conversing, we want something to be meant, not just said.
As a longtime professor of English, I often detect the deadly anonymity, the funeral-home scent, of most A.I.-generated papers, with their slightly elevated diction and sustained mildly Ciceronian style. The lowest common denominator is no substitute for Percy Bysshe Shelley or Robin Wall Kimmerer. Odder writing is more valuable than obvious writing, and predictive reading is less helpful than attention to an author’s unique voice.
Writing for Freedom
In place of meaning, what readers of A.I.-generated texts get is a statistical mean that empowers other forms of meanness.
Timothy Snyder’s New York Times bestseller On Tyranny has made him a leading public intellectual: a Paul Revere warning us about the drift toward fascism. Snyder’s new book, On Freedom, dedicates 50 pages to “Unpredictability” as a crucial element in sustaining our liberties. He insists that true liberty—for individuals and societies alike—depends on developing “possible futures, unpredictable to aspiring tyrants and uncaring machines.” Accepting “normality” is surrendering those futures. A.I. writing is inherently a propaganda machine for the status quo—in source and style, and thereby in politics and beyond.
While A.I. writing epitomizes indifference, its deployment also produces toxic opposition. Tell it in your prompt what you want it to assert, and it will do that grandly: no need to review your evidence and logic in producing your argument. By creating echo chambers that pretend to be conversations, it functions as an insidious form of confirmation bias—the widely lamented psychological tendency intensifying our current political schisms.
A.I. reinforces that tendency on social media by showing us more of whatever its predictive algorithms say will grab our attention, which is done most effectively by provoking our fear and anger. Those systems strive to make us more of what their data sets say we already are—and therefore less freely and fully human. So concerns about generative A.I. text are connected to concerns about A.I.’s polarizing effect on social media. Both functions produce people, across the political spectrum, who are … predictable.
Left-wing campus speech codes, and now right-wing ones, drive us into shared euphemisms, which George Orwell’s renowned 1946 essay “Politics and the English Language” warns are symptoms of rising tyranny. That essay also refutes the assumption that “any struggle against the abuse of language is a sentimental archaism, like preferring candles to electric light.”
A crucial cure for these addictive reinforcements of our assumptions, which lead to our predictability, is reading less-filtered histories and registering their unfamiliar perspectives—the function those ancient scribes were urged to enable and protect. Another cure is developing what the poet John Keats, in an 1817 letter to his brothers, called “Negative Capability, that is when a man is capable of being in uncertainties, Mysteries, doubts, without any irritable reaching after fact & reason”—a quality Keats said Shakespeare possessed supremely. Making people see the infinite variety of human individuals and possibilities and undermining complacent certainties are the most persistent projects of Shakespeare’s plays. Young people’s imaginative, expressive, and independent growth depends on developing their ability to read the unexpected and discern its meaning—which is what phonics-based reading teaches in micro form. Having ChatGPT, Grok, or Gemini make your prompt into an argument is the opposite of all those achievements.
This isn’t just a literature professor’s nostalgic lament about the devaluation of writing. Recent studies from Santa Clara University, Cornell University, Massachusetts Institute of Technology, and the University of Toronto show that working with those LLMs makes people more homogenous and conventional in their opinions and less innovative in multiple ways, with fewer new connections activating across their brains.
Diversity isn’t just a code word for identity categories that higher-ed admissions and hiring committees might feel they should consider. It’s a key factor in the survival and success of our species, as epidemiologists have observed about pandemics. A new book by Henry Gee, The Decline and Fall of the Human Empire, warns that our narrowed genetic diversity (there used to be at least nine human species) and our narrowed range of food types will probably cause our extinction.
That’s true of our minds as well as our bodies. Let’s not allow the A.I. agents of thoughtless sameness to do what they do best, and worst: dulling our curiosity, and hence our powers of resistance, by overriding the time-honored sociocultural immune systems known as reading and writing. Like true lovers across distances of time and space, let’s keep writing to each other from our hearts and minds. That’s what Juliet and Romeo promised to do, before the Friar invented what seemed like a quicker, cooler, more technical way to overcome their separation. The result was tragic.