After spending God knows how many hours writing, I’ve come to realize that real feeling is most sincerely expressed by the absence of language. It’s never early in a relationship that the boyfriend needs to reassure his lover with words. She knows the truth from his three thirty-second acts each day. The proof is, as they say, in the pudding. It’s only after six years, once he starts treating sex like a dentist appointment, that he starts laying it on thick. “Baby, can’t you see that everything we lose in lust we gain in love?”
Last week this idea—along with mild intrusive thoughts and flashing images of Emily Ratajkowski nude—was in the back of my mind when I was talking to my colleagues about AI. Many of them profit from it every day, and nearly all used the word “scary” when describing it. But the engineers refuse to talk about it. Their silence indicates real fear.
The developers are right to be scared: Microsoft just laid off ~3,000 engineers. A quarter of the last Y Combinator¹ batch used AI to program ninety-five percent of their codebases. A friend used Gemini² to build a prototype over the weekend instead of hiring developers abroad as he’d originally planned. Computers have become really, really, really smart.
Watching this play out has led me to a simple theory on AI: In the future, job security will be directly correlated with (1) how difficult it is for a computer to do that job and (2) how much humans want other humans to do that job.
One of the stranger implications of this theory is that plumbers, electricians, maids, etc. may soon experience more economic stability than lawyers, quants, and other floating heads. Customers don’t care who or what is unclogging their toilet, but machines appear to be far from developing the fine motor skills required to do that work.
Software engineers are in more trouble because they fail on both points: Algorithms are finding it easy to learn how to code, and no one, save for the engineers themselves, cares whether cells or bits are optimizing the Instagram algorithm. The user, apparently, just wants it to find the skimpiest bikini, fastest boxer, funniest meme, or strongest gorilla. If one holds quality and efficiency equal, most people would actually prefer the machines. They drive the price of services down because they are cheaper.³ On a cold rainy night, would you rather pay half the DoorDash fee thanks to a computer? Or full price because of a human?
Doctors are different from both of the above cases. Even though algorithms are already better at detecting cancer than they are, the patient still wants a person to commiserate with. The more empathetic, warm, kind, and patient a physician is, the safer his job will be in the future. Patients will start grading him more on his humanity than on his diagnostic skill.
Because of exponential progress, it’s useful to assume that robots will soon master the tasks they find difficult today. Thus, it is the second part of the theory that helps one plan long into the future: What functions will we always want humans to serve? Quite a few ideas come to mind—teaching meditation, personal training, judging cases—but one of the more surprising is composing novels.⁴ Originally everyone feared that LLMs would destroy writing jobs, since both the models and the work run on language. I don’t think that will be the case. To be clear, I’m not suggesting that engineers should quit their jobs to learn how to write books. It is very difficult to make a living as a novelist, especially today. My point is simply that it is a job computers can never take away.
Most fiction becomes worthless if the author is an entity that cannot experience or feel. The value of The Sun Also Rises, Of Human Bondage, The Bell Jar, and other autobiographical novels arises from their creators living through the events and emotions they documented. The reader exults in shared sentiments; soon enough he feels that these people, whom he has never met, are his closest friends. No longer is he forlorn in his humiliation: a dead American also suffered its every detail. I could give a billion examples of this phenomenon, but since Henry Miller’s The Colossus of Maroussi is next to me, let me provide one from him:
It was one of the few times in my life that I was fully aware of being on the brink of a great experience. And not only aware but grateful, grateful for being alive, grateful for having eyes, for being sound in wind and limb, for having rolled in the gutter, for having gone hungry, for having been humiliated, for having done everything that I did do since at last it had culminated in this moment of bliss. (139)
When I read this I am in heaven because I know exactly what he is feeling. I have felt it before, and I feel it again when I read that passage. Had a computer written those words, the value to me would be exactly zero. The computer would be lying. An algorithm cannot experience this truth for itself. Even if robots become conscious someday, they still won’t know what it’s like to feel human emotions.
What about stories that are less strictly autobiographical? When considering this question, the first books that came to mind were Nineteen Eighty-Four, The Trial, and The Great Gatsby. Yet even these tales, though more fantastical, are direct extrapolations of the authors’ experiences. Orwell fought alongside Spanish revolutionaries whom Stalin’s secret police executed. Kafka was maddened by government bureaucracy. Fitzgerald didn’t parse Reddit posts to write about parties and pain: He lived through the Roaring Twenties, where he discovered Gatsby’s suffering in himself:
That was always my experience—a poor boy in a rich town; a poor boy in a rich boy's school; a poor boy in a rich man's club at Princeton… I have never been able to forgive the rich for being rich, and it has colored my entire life and works.
If one goes to the farthest end of imaginative writing, human authors are still preferable to machines. Kafka on the Shore, for example, is entirely untethered from reality, as close to an abstract dream as literature can get. But it is great because Murakami draws on reserves of emotion to turn himself into the characters that were living as fragments inside of him.
The only novels that computers may be well suited to write are those that put little emphasis on the felt experiences of characters. One can imagine a reader prompting ChatGPT with a theme, voice, and setting, then receiving a thriller tailored exactly to his tastes. In this case the human gets the pleasure of partially crafting the story and the immersion of reading it. There may be demand for these gripping tales, which make the audience feel as though they’re Tom Cruise in Mission: Impossible. But humans will always want other humans writing novels that focus primarily on consciousness.
Soon, grifters will start using AI to write accounts masquerading as their own experiences. Thus, novelists will need to substantiate the events that formed the basis for their stories. But this is nothing new. Journalists corroborated the Fitzgeralds’ flapper lifestyle through newspaper columns. The French government verified Céline’s fitness to write about World War I when they awarded him, hilariously, a military medal for his “bravery.” Nin, who financed the first printing of Tropic of Cancer, was more than an eyewitness to Miller’s sexual escapades: He was banging her too. And Hemingway provided more evidence than all of them combined, often in the form of now-famous photographs.
But even if an author shows his audience a bullet lodged in his throat, how does he prove that he, not a computer, penned the novel’s words? University professors are already facing this problem. Since Google Docs and ChatGPT threads have histories, some professors have recently suggested that students provide their workflow along with the assignment. But this is an incomplete solution, as one could easily game it (e.g. prompt ChatGPT from a phone and then transcribe the text). I’m sure that there will be better technological solutions in the future, but, as of now, one option is to write like you talk. If the delta between a novelist’s narration and his extemporaneous speech is close to zero, then you can trust that he is the author of his words. To solve for this, the writer could do podcasts, TV interviews, or in-person events.
In the past, novelists have been reluctant to do this sort of thing. (Think of how few recordings there are of many 20th-century greats.) The desire to live through the written word goes beyond the stereotype of writers as hermits. Part of the reason might be that they do not want to influence the voice the reader hears as he peruses their books. But the main reason is that, since they care so much about language, many of them do not want to come across as poorer in speech than they are on the page. This explains why The Paris Review has never moved their interviews to video, and why they allow their subjects to edit the interviews afterwards. Going forward, novelists may no longer have this luxury.
Another solution is to develop a distinctive style while ChatGPT is still much worse at writing than humans. When attracting new readers, the novelist can forever point back to his earlier work to show that his newer stories are a continuation, or an evolution, of what he was doing before computers became as fluent as Joyce. In this case, an original voice becomes all the more important as it is more easily differentiated from writing that sounds like everybody else.
The best solution is to build a deep trust with one’s audience now. Currently humans are competing with other humans for the limited attention of readers. But if a new version of DeepSeek comes out tomorrow that can write like a given novelist, then he will be in competition against swindlers who claim the computer’s words for themselves. If it becomes difficult to detect who (or what) wrote a sentence, readers may flock to those who built their audiences while AI was in its relative infancy. They will know that those writers attracted readers based on their own merit. There are many ways to build this trust, but a simple method is to disclose one’s AI policy as clearly as some writers already do.

Novelists are on the clock. They should showcase their skill while great language is still an entirely human domain. Soon enough, fiction writers may be guilty until proven innocent. If you’ve ever wanted to dedicate yourself to writing, the time is now.
1. A prominent startup accelerator in Silicon Valley.
2. Google’s competitor to ChatGPT.
3. Of course this assumes that most people do not care much about the societal implications of such a system, which, based on historical behavior, they do not until it is too late.
4. There are lots of other writers who will also fare well. Much of biographers’ and journalists’ work—researching, interviewing, etc.—is very human, but those professions aren’t the focus of this piece.