AI stands for “Artificial Inanity”
4 Aug 2025
There’s something icky about LLM-generated text when you think it’s written by a human. I think I finally put my finger on one reason why I feel this way.
Note on the title: “Artificial Inanity” comes from Neal Stephenson’s novel Anathem.
At work I was sent a long design document and asked for my thoughts on it. As I read, I had a really hard time following it. Eventually I guessed correctly (confirmed via a follow-up conversation with the “author”; I put “author” in quotes because, if a machine wrote it, you don’t merit being called the author of the work) that an LLM had generated the majority of the document. Parts of it sounded like a decent design document, but there was just way too much fluff that served only to confuse me.
When I read technical documents, I read to understand the content. In this mode of reading, I operate under the assumption that the author had a reason for choosing the words they did, and that every sentence is there to convey something that the author wishes me to understand.
This mode fails when an LLM or the like has generated the text. When I read something I know came out of a computer’s probabilistic sampling of a token-space, I have to read knowing that every statement might be hallucinated slop or incidental filler. I cannot trust that the human operator’s intent is expressed by the machine. In fact, I am confident that it often is not, but I have to waste tremendous effort trying to find that gap. Reading slop text when I think I’m reading real text is exhausting: since I am not on the alert for hallucinations or irrelevancies, every turn of phrase that seems out of place causes me to wonder why that phrase is there and what I am missing, when in reality such questions are ill-formed: the phrase was just composed by accident, and it sounds good but is devoid of much intent at all.
Intent is the core thing: the lack of intent is what makes reading AI slop so revolting. There needs to be a human intent—human will and human care—behind everything that demands our care and attention. Even if you agree with Roland Barthes (author of “The Death of the Author,” an essay arguing that focusing on the author’s intent is fruitless, since the meaning of a text is the effect it has on the audience) and his views on literary criticism, the fact that there is an author who put care and intent into a work imbues that work with infinitely more meaning than if it were spat out by a machine.
Counterfeits of human connection will—unfortunately—always be in demand. The multi-billion dollar industry churning out pornography is proof enough. People will probably always, from here on out, be using LLMs to cheat their way through classes and themselves out of learning. Some will turn to them for faux-companionship. Others will be prompting themselves to death by offloading more and more of their reasoning to machines, convinced that the computer—like a slot machine—somehow will let them win bigger in life.
I am not saying that LLMs are worthless—they are marvels of engineering and can solve some particularly thorny problems that have confounded us for decades. But it’s important to remember that, no matter how capable these machines get, they are not humans. And no human is so worthless as to be replaceable with a machine.