Robots and Humans
The Toll of AI
Plato’s fear about writing in the Phaedrus was that it would destroy our capacity to remember. Speaking through Socrates, he wrote that “if men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.” This may seem hyperbolic, but ask yourself: can we actually remember a time before writing?
You may complain about Plato’s proto-Luddism, but are we really in a position to evaluate the changing quality of man’s memory when we can’t remember what it was before? Homer’s epics were passed down orally by poets, and Herodotus’s Histories drew on oral traditions. The idea that this kind of preservation would be possible today is nothing short of fanciful.
If Plato was right, then something about our psychology changed when we found a crutch that substituted for our own natural capabilities. That is to say, what it meant to be human changed. It is well accepted in neuroscience that the brain can rewire itself to adapt to changing circumstances far faster than Darwinian evolution typically works. By the same token, the underused parts of the brain can atrophy. If, by using a technology, we end up relying less on a constitutive part of ourselves, our capacity for it diminishes. Make no mistake: despite how basic writing seems, it is, at its core, a technology. It did not come immediately to human beings. We’ve been roaming the earth for hundreds of thousands of years, and the earliest written language we’ve found is Sumerian, from roughly 5,000 years ago. We can’t remember a time before it, so we have no way of judging our changing capabilities.
What, then, does this have to do with robots? If writing arguably changed who we are, then artificial intelligence poses a much more fundamental challenge. With writing, we can’t really remember a time before it to judge how we’ve changed. With artificial intelligence, we might be losing the ability to “judge” altogether. Cogito ergo sum (I think, therefore I am) is Descartes’s famous maxim. Aristotle likewise called man the “rational animal.” Whatever daylight there is between the two, they mark the usual baseline for agreement about the nature of man himself: that, at a minimum, we are defined by our ability to reason. Human beings may reason poorly, but they do it nonetheless. Without it, we are no more than the beasts.
In late 2022, ChatGPT was released without much fanfare. But within five days, it reached a million users. Within two months, it had 100 million. In October 2024 alone, it had 3.6 billion visits. It’s very quickly become a tired cliché to say that AI is changing the nature of work and education, but it is. Grammarly ran an ad earlier this year showing off how it can take writing off our hands. Students are using it to cheat en masse, lawyers are feeding it complex questions of law and pasting the answers into court briefs, and that’s just the tip of the iceberg. Why bother sitting down to compose your thoughts when the robots can do it for you?
One of the most obvious rejoinders to that question is that sometimes the robots get it wrong. And they do, often. I won’t pretend to be a Luddite and say I haven’t used it, but when I have, it regularly “hallucinates.” This is a technical term for “makes stuff up.” Once I asked it to find me examples of academic articles that mention the use of tools in John Locke’s understanding of property rights, and it proceeded to give me three made-up articles. I told it they were fake, which it acknowledged, apologized for, and then gave me three new, definitely-not-fake, fake articles. Once again, I told it that they were fake articles. It apologized and gave me three new articles that were… you can probably tell where this is going. The lawyers I mentioned in the last paragraph have been sanctioned by courts for relying on fake cases generated by ChatGPT. Those same kids using it to cheat? They’re spitting out thousands of badly written, AI-generated essays, often forgetting to delete the program’s opening line: “Sure, here’s an essay on [topic].”
Just recently, in my own law practice, I came across a brief from the other side that was terrible: poor, muddled analysis, inaccurate references to cases and the evidentiary record, and a kind of rigid structure I’d only ever seen from an AI chatbot. This made me wonder, so I ran it through an AI writing detector to see if it might confirm my suspicions. It returned a 100% chance that the brief was AI-generated. At least in this case, they didn’t include the initial prompt in the pleading, but they did leave in the AI-generated inline citations, which showed up as full-size numbers rather than footnotes once the text was exported to Microsoft Word.
While these stories are funny, they point toward a more terrifying fact: even though the robots don’t work perfectly yet, people are outsourcing their ability to reason and think to them. ChatGPT is already good enough to surpass what a B student in high school would likely submit on any given assignment, or what a mid-level corporate employee would send as an internal memo. It’s a tremendously complicated algorithm that even its creators no longer fully understand, but its output still betrays its fundamental nature: robotic.
Regardless of what it’s being asked to do, each output sounds disturbingly similar to the others. All writers have a “voice,” and the robots are no exception. If you’ve ever asked ChatGPT a question, you know what I’m talking about. For one, the sheer number of em dashes. I lament this development; I used to love em dashes. But they’ve become one of the surest tells that something is AI-generated, so I’ve had to forgo them for the most part, even where it makes sense to have one—like here.
What it also does really well, like a robot, is apply fixed rules. Language is a collection of rules that, when adhered to, yields a defined structure. ChatGPT and the robots do a really good job of adhering to the formal conventions of language. That’s exactly what English teachers harp on their students about, and arguably over-emphasize: the formality of writing. I get that without a solid base or structure, you can’t build anything, and that if your writing is garbled or riddled with poorly written sentences, you’re never going to be able to say anything worth saying. But it’s always seemed to me that an excessive focus on formality, in anything, is merely a cover for a lack of creativity and ingenuity. Sure, you should probably check for typos and whatnot, but I would rather read Hemingway or Nietzsche with some mistakes than a completely polished Atomic Habits or You Are a Badass. But if you’re 15 years old, you’re probably not going to say something mind-blowing or world-historical, and probably don’t even want to try, so why not just let the robot quickly cobble something together that your teacher will probably prefer to your mangled jumble of words? The temptation is real.
Another sign of an AI-generated piece of writing is its genericness, inoffensiveness, and formality. It doesn’t sound like anything a human being would organically say. I used to roll my eyes when someone said your writing should sound like you talking, but I understand it now in 2025, even if I think it was bad advice to a bunch of kids in 2013. I used to think: why wouldn’t you want to sound like you took more time to figure out what you’re trying to say? But what those people were trying to prevent was exactly the kind of lame, superficial, yet carefully curated writing that ChatGPT churns out a billion times a day.
If you’ve ever half-assed a high school English essay the old-fashioned way, this will be familiar: you start off with a generic, inoffensive filler line about how [insert x book] is a fundamental/widely regarded classic that deals with [insert x characteristics]. It makes you feel like you’ve said something when all you’ve really done is knock out 10 words toward the word requirement. “990 words to go! If I keep up this pace, I should be done in an hour or two!” On the other hand, if you were asked to write about something a bit more risqué, you’d probably start off by noting the controversy around it. Or, on Vishnu’s third hand, if you’re writing about something that Established Knowledge has decided is Bad™, you’d probably open by calling it notorious or infamous. Each of these openings is relatively inoffensive: whatever your specific opinion of the book, it at least puts you in the same position as the smart critics who’ve already passed judgment on it. You may or may not be one of the genius critics in the know, but you don’t want to go against them.
Ironically, this is exactly what ChatGPT pulls together. I tested this proposition by asking ChatGPT to write me essays about three books: one widely respected (Mark Twain’s The Adventures of Huckleberry Finn), one controversial but not generally reviled (Michel Houellebecq’s Submission), and one considered taboo or suspect (Vladimir Nabokov’s Lolita). And guess what it did? It began each by saying that Huck Finn is “widely regarded as one of the greatest works of American literature,” that Submission “is a provocative novel that explores the spiritual and political decline of Western Europe,” and that Lolita is “a novel infamous for its disturbing subject matter.”
The rest of an AI-generated essay is no more impressive. Like AI-generated writing generally, each of the essays I mentioned was a collection of unoriginal points amounting to little more than a shorthand summary of its subject, which is exactly what most high school essays are in the first place. At least the robots know who they’re writing for. They’ve got that going for them.
The term people have come to use for this kind of factory-farmed, clichéd dreck is “AI slop,” which is what it is. Slop is easily made, low-effort, low-quality kitsch: minimally acceptable, but profoundly uninteresting and unoriginal. It’s not the robots’ fault, though. At their core, they are machines: LLM stands for Large Language Model. LLMs are large-scale prediction models that try to figure out, based on context, what the most likely next words are in a given sequence. They’re not designed to be original; the only way they avoid being incorrect is by reproducing the patterns of words in whatever they were trained on. They’re not designed to think, they’re not even designed to simulate thinking; they’re designed to mimic the output of thinking. The robots resemble the high school student trying to B.S. an essay because they themselves are B.S.’ing the entire act of writing and thinking.
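To make concrete what “predicting the next word” means, here is a toy sketch in Python. It is my own illustration, not how ChatGPT is actually built: real LLMs use neural networks trained on billions of documents rather than raw word counts, but the underlying objective, picking the likeliest continuation of the context, is the same.

```python
from collections import Counter, defaultdict

# A toy "language model" built from word-pair counts: for each word,
# tally which words have followed it. (Illustration only; real LLMs
# use neural networks over vast corpora, not raw counts.)
corpus = (
    "the novel is widely regarded as a classic . "
    "the novel is a provocative work . "
    "the novel is widely regarded as a masterpiece ."
).split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

# Starting from "the", greedily emit the likeliest next word, seven times.
phrase = ["the"]
for _ in range(7):
    phrase.append(predict_next(phrase[-1]))

print(" ".join(phrase))  # -> "the novel is widely regarded as a classic"
```

Fed three essays that all open the same way, this toy model dutifully opens a fourth the same way. Scale that logic up by a few billion documents and you get the genericness described above.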
The robots’ job is to mimic what’s already been said. Of course, part of any education is learning what to model your own habits and behavior after. But implicit in that is that you have to find the right models to emulate. If you’re a writer, you might try to mimic the eccentricity of Tom Wolfe, the bluntness of Ernest Hemingway, or the irony of Jane Austen. There is no shortage of good models to choose from, but you have to choose them. You can tell ChatGPT to write something in the style of each of them, and it’ll give you something that might pleasantly surprise you and lead you to question everything I’ve said thus far, but what it won’t do is create something new. Hemingway had to be Hemingway before he could be copied. The best writers and thinkers don’t choose models merely to copy them, but to go beyond them, to add their own flair and create their very own voice. The robots are endlessly flexible, but they’re a mile wide and an inch deep. They aren’t trying to say anything new. They aren’t designed to give you something brilliant, just something passable enough to get a decent grade and let you move on with your day.
Of course, most writing and thinking isn’t that original in the first place. It’s a uniquely humbling experience to think you’ve had an original thought, only to read something written decades, centuries, or millennia ago and find that it says exactly what you were thinking, just simpler and better. Most of us aren’t brilliant, and as the robots get better, it’s possible that they’ll end up surpassing what most of us are able to create. I’m skeptical that they will, but even if they do, we’d lose something along the way. We’d lose a part of ourselves.
Which brings me back to Plato. Writing wasn’t a direct substitute for memory, but it did well enough at providing the means for people to access certain information more easily. We still remember things, but having easier access to information definitely makes it less imperative that information be fixed in our brains. Anyone who’s lived through the rise of search engines knows that just because we have greater access to recorded information doesn’t mean we know more than we used to.
Sure, the oral tradition that preserved Homer’s epics and the stories in Herodotus’s Histories probably added some details along the way, but my intuition is that what we derisively call the game of Telephone—progressively transmitting information until it’s unrecognizable under successive accretions of detail—is at least in part a testament to our reduced capacity to remember what we once could.
AI isn’t a direct substitute for reasoning, but it does well enough at providing the means for people to access certain information more easily. We still reason, but having easier access to a ready-made exposition definitely makes it less imperative that the ability to reason or think be cultivated in our brains. Anyone who’s living through the rise of the robots knows that just because we have greater access to arguments doesn’t mean we’re more rational than we used to be.
Far from it. To reason is to be human. We are unique among known animals in our capacity to reason. My dog has flickers of such light when she figures out how to open the sliding cover of her toy to get a treat, but it doesn’t go far beyond that (sorry, Maisy). Consider the Imago Dei: Genesis says that “God created mankind in his own image, in the image of God he created them.” One of the great theological mysteries is what exactly that means. One guess is that it refers to our ability to reason: that to reason, and to reason well, is directly connected to the order that God created. The Bible refers to Jesus Christ as the Logos, which, in addition to being translated as “the Word,” is also translated as “Reason.” God is Right Reason, and to be an echo of that reason is to be human.
Human culture has depended on our ability to form souls and make them conform to reason. The ancients understood that a rightly ordered soul placed the Logos, or reason, at the helm and subordinated the other passions to their rightful place. Having a thinking machine might be convenient, even lucrative, in all sorts of respects, but it’s fundamentally a loan against our own rationality and creativity. The robots couldn’t create Hemingway from scratch. They couldn’t conjure up the irony and nuance of Plato’s dialogues from merely reading the pre-Socratics. Nor could they write a single chapter of Nietzsche’s Thus Spoke Zarathustra. Will the next Nietzsche bother to enter the steppe? Will the rest of us even bother to try? Just who are we becoming?
The robots are clever in all sorts of ways, but arguably their greatest achievement is convincing us not even to try. We create new technologies to “fit” our needs and desires, but the relationship goes both ways: just as often, we fit ourselves to our technologies. The principal effect of AI is not that it composes slop, but that it makes us incapable of composing anything but slop. We kneecap our own abilities for the sake of convenience, to the point that we need the technology to stand in for our voluntarily reduced capacities. It needs us as much as we need it. In using the robots, we become like them; we become less human. And we’ll never compete with them on those terms.
By using AI, we won’t lose the capacity to reason altogether, but to lose access to its highest vistas would be a tragedy on the order of the Fall from the Garden, and one not easily remedied. We’d do well to remember what it is that makes us “Human.” So read old books, or good new ones. Write long screeds. Have long, drawn-out arguments. Experiment. Try and fail. And in doing so, become more fully Human.