If Chatbots Can Replace Writers, It’s Because We Made Writing Replaceable


A few months ago, I went to a birthday party at a bar in Neepsend, an old industrial neighbourhood by the River Don in Sheffield. The bar had been a steelworks once, but now it was another example of the international style you find everywhere, from Portland, Oregon, to all the other Portlands in Canada, England, Australia, and New Zealand: exposed brick, steel beams, concrete floors, Edison bulbs. The steelworkers had been transformed into accountants and brand managers, the molten pig iron into £9 cocktails.

When we sat down for dinner, I was placed next to a German who ran a small publishing company specializing in AI-generated motivational e-books. Trying not to be rude, I asked him how he’d gotten into this line of work. He told me he had trained as an academic philosopher. A post-graduation period of unemployment had given him plenty of time to play around with AI software, and he soon realized there was more money in inspirational quotes than there ever would be in phenomenology.

I left the party feeling a bit low. Neepsend was buzzing, but all I could see were the derelict factories overhanging the river, the crumbling smokestacks sprouting vines and buddleias. This nondescript tract of the River Don was once a white-hot centre of the industrial revolution. Now that the factory work has largely been automated, an entire way of life has passed away. It hadn’t been a great life for the workers in the steel mills—when novelist George Orwell passed through the city ninety years ago, he described the back-breaking work, the low wages, the poverty, and the grime that afflicted what he called “the ugliest town in the Old World.” But it gave people a sense of purpose and identity. And god only knows the low wages, poverty, and grime haven’t left Sheffield, even if they’re being swept out of Neepsend.

Walking by the Don, surrounded by the wreckage of this vanished world, I wondered whether the German phenomenologist was a sign of things to come in my own industry. For the past eleven years, writing has been my day job as well as my hobby. In that time, I’ve been a journalist, an SEO hack, a literary critic, an editor, and a ghostwriter (one contract, rather perversely, required me to be all of those things at once). I’ve also written a novel and put together an anthology of short fiction.

Perhaps I would soon be like the men who give blacksmithing demonstrations to school groups. Gather round, children, and look at the writer at his desk—notice the empty plates and coffee mugs, the way he slouches in his chair, the purple bags under his eyes. See how long it takes him to write a paragraph? Can you believe he made less than half a dollar a word? This is what writing was like in the olden days.

When OpenAI launched its first consumer-facing AI chatbot, ChatGPT, nearly three years ago, I didn’t pay much attention. But it wasn’t long before generative AI was being talked about in terms that were, to say the least, immoderate. It was going to transform education, health care, media, and tech; it was going to make us all insanely productive, cure cancer, and strengthen national security. It was going to become a kind of imminent god, a cybernetic superconsciousness.

Naturally, not everyone thought this was a good thing. There are plenty of arguments for why AI is a menace: there’s the accuracy argument (sometimes it makes things up), the plagiarism argument (the appropriation of data to train large language models is a massive violation of intellectual property law), the bias argument (it regurgitates the racism, ableism, and misogyny of the data on which it was trained), the environmental argument (it consumes a lot of energy), the workforce argument (it’ll make a lot of jobs redundant), the literacy argument (it will be hard to teach people to read if they have a machine that can do it for them), the quality-of-digital-life argument (it fills social media with AI slop), and the authoritarianism argument (it’ll make governments more powerful and less accountable).

I think these arguments are all valid enough, but they’re beside the point. The large tech companies are gambling that AI is the future, no government wants to fall behind in the digital arms race, and the user numbers speak to the inconvenient fact that these tools are very popular.

I don’t find this hard to understand. Plenty of people don’t like writing and aren’t very good at it. They might be comfortable enough texting or posting or sending emails, but formal composition isn’t something they have to do all that often, and like most of us, their skills have risen no further than the level demanded by their work. When they find themselves needing to write a cover letter or a report, they’re aware that they’re going to be judged on the grammar, word choice, and logical flow as much as the content—so why wouldn’t they take advantage of technology that will help them get it right?

Which is why I think nostalgia for a pre-AI world misses the point. If we want to reckon honestly with what’s happening, we have to admit that writing was already pretty algorithmic before AI ever turned up.

You don’t stay solvent in this line of work without developing a sense for the economic and social, as well as the aesthetic, value of writing, which means being able to articulate why someone should pay you to organize language and information for them.

As you might imagine, ChatGPT, Claude, and DeepSeek make this harder for me to do. But I’m happy to admit that this is because these apps are quite good at tasks that require repetitive, formulaic writing and summarizing. A lot of the contracts I’ve done over my career have been pretty dull, and if these jobs were to dry up, it’s not the work I’d miss but the paycheques.

When new technology makes an industry redundant, nostalgia for the pre-disruption era is natural. But I’ve spent too many hours transcribing interviews to get sentimental about that work, and I don’t think a return to the bullshit jobs of the twenty-first century or the factory labour of the twentieth would be desirable even if it were feasible. The problem isn’t that AI is going to replace human labour; it’s that the material benefit created by gains in productivity is going to be sucked up by the asset-holding classes. Two hundred years of worker unrest in my corner of England attest to the fact that this problem existed long before computers. If you’re worried that AI is going to concentrate wealth, political power, and control over public opinion in the hands of a few oligarchs, you’re really just worried about capitalism.

Of course, for a writer, generative AI also poses more existential questions. It was uncanny at first, entering a prompt into ChatGPT and watching the lines of text appear, as if I were talking to a painfully upbeat, digital Mephistopheles. The more I’ve learned about the technology, the more I’ve found myself wondering whether I’m actually doing anything different when I sit down to write. Like a chatbot, I’ve been trained on a dataset comprising everything I’ve ever read, every conversation I’ve ever had. Like a chatbot, I respond to prompts by generating a string of words based on patterns of usage I long ago internalized. If a lot of conversation between humans is just a matter of swapping set phrases back and forth, is there really something else going on when I write, something that only a human can do?

If there is an affirmative answer to that question, it isn’t so much about innate creativity, insightfulness, or any of the other qualities writers like to claim. Instead, it’s about glands. An AI chatbot can never do what a human writer does because an AI chatbot is not a human. I don’t mean chatbots will never produce better writing than humans (this has already happened), or that humans won’t prefer AI writing (this is already happening). I just mean that AI chatbots can’t write like humans because they don’t have human bodies: they don’t have the cortisol, adrenaline, and serotonin, the limbic system, or the genitals. They don’t get irritated or obsessed. They don’t hold grudges. They aren’t afraid of death.

There are many situations in which glands are a liability. The promise of AI is that it will remove the human messiness from research and writing, allowing us to focus on the ideas. But suggesting that chatbots can outcompete novelists or essayists is a bit like saying a motorcyclist could win the Tour de France. If the point of writing is to produce a clean, grammatical product that meets desired specifications, the chatbot has a clear advantage. But if the point of literature is for a writer to know their own mind, and for the reader to encounter another mind, things are more complicated. To claim that an AI-written essay has the same literary value as a human-written one simply because we can’t tell them apart is to mistake the point of literature entirely.

The value we place on writing, and the value we get from the writing of others, has everything to do with the fact that we are a particular type of creature, an organism with a limited lifespan and a limited ability to know or understand the world around it.

When we say that someone is a good writer, what we usually mean is that they have a special skill for helping readers see what the world looks like from their particular vantage point. And “vantage point” is not a metaphor here—literary style is always rooted in the particularities of a writer’s life. The books they’ve read most deeply, the way people in their community spoke, the unique associations they have with certain words, whether they use a pen or a typewriter or a word processor, the very fact of being born in Glasgow in the 1940s or Kapuskasing in the 2000s. Even if a writer builds their perspective around the rejection of the conditions that shaped them, the fact of that decision will linger in their sensibility like the outline of a painting on sun-faded wallpaper. The best writing is always marked by the half-acknowledged obsessions and desires and slights that led to its creation, and the idiosyncrasies of the writer’s own character.

Generative AI, however, has no idiosyncrasies of character because it has no character, just as it has no desires because it has no glands. Chatbots are based on large language models, word-prediction machines trained on vast libraries of data. When you ask ChatGPT to write you an essay about Macbeth, it breaks language down into a series of tokens representing letters, words, and word fragments, then uses a probabilistic model to generate, one token at a time, the sequence most likely to constitute an acceptable response to the query.

Essentially, it predicts what an average essay about Macbeth would look like, and then refines that average based on whatever additional input you provide (the average feminist essay, the average anarcho-feminist essay, etc.). It’s always a reflection of the mean. When the mean is what you’re looking for, it’s phenomenally useful.
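The point about regression to the mean can be made concrete with a toy sketch. What follows is not how ChatGPT actually works (real models use subword tokens and deep neural networks, not word counts); it is a deliberately crude bigram model, with a made-up miniature corpus, that shows the underlying logic: given a word, emit whatever most often followed it in the training data.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation of `word`: the statistical mean."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# A tiny invented corpus in which "out" is usually followed by "damned".
corpus = "out damned spot out damned spot out out brief candle"
model = train_bigrams(corpus)
print(predict_next(model, "out"))  # prints "damned"
```

Scale the corpus up to most of the written internet and the prediction unit down to subword tokens, and you have, in caricature, the machine that writes the average essay about Macbeth.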

This is why, for all the distaste I feel toward generative AI, I cannot help but appreciate the irony in the rise of the chatbots. Much as we claim to value novelty and individuality, the advent of industrial society and mass culture was accompanied by a concerted push toward standardization in the arts and education. This process, driven by a desire to establish objective metrics for success and easily replicable best practices, has created a culture where the appearance of variety and diversity is maintained by a highly regimented process of capitalist production, in which every artwork must be a product and every artist must be a brand.

Market pressures are now so intense, and industries have become so consolidated, that a good deal of what gets published every year already reads like a photocopy of a photocopy. In their anxious desire to turn out consistent, frictionless consumer products guaranteed to find a market, commercial publishers have retreated into existing IP, genre trends, and the proven titles in their glorious backlists. Hence the vogue for myths and legends narrated from a new perspective, classic novels retold from the perspective of secondary characters, and the success of imprints like New York Review Books and Daunt Books, which specialize in reissuing the great novels of the twentieth century.

It’s no better in academia, where the push to establish objective metrics has led to a catastrophic overproduction of journal articles. More than 5 million are published a year, as of 2022 (up 22 percent from 2018), and yet as far back as 2007, researchers found that 50 percent of articles were read only by the author and their editors. Academic writing has never been known for its grace, but the rigidity of the form and the need to churn out material has rendered humanities scholarship little more than a wistful collage of block quotes and rote formulae. It is a monument to late capitalism as vast and appropriate as the Great Pacific Garbage Patch.

Many scholars are still trying to do good work, of course. But when an article’s real value is as a citation on a CV, it’s clear that quantity, not quality, has become the ticket to the few tenure-track jobs remaining. If chatbots can replace writers, it’s because we surrendered to the formulaic long ago.

Writing is a technology I’ve relied on daily for the past thirty years. I can imagine a world without writing, but I can’t imagine what I would be like without writing. This is the point Walter J. Ong makes in his seminal work on orality and literacy: writing both expresses and moulds the desires of the user. This is why I think the utility of LLMs is overstated. Writing isn’t just about packaging information, but wrestling with it, questioning it, trying to figure out why it matters. And crucially, it’s in the least-optimized parts—where you fiddle with the same sentence for twenty minutes and then delete the whole paragraph—that the real work happens.

I suspect this is the reason algorithmic writing (of the human if not the AI variety) is so common in academia, journalism, and other industries where you need to churn out a lot of content to stay profitable. Really grappling with the material takes a lot of time and energy, and there’s always the risk that you’ll pour weeks into a project only to have it fail. Much better to have a few templates you can rotate through: College Students Are Ruining [X], President Donald Trump’s [Y] Is a Threat to Democracy, Ten Books to Read If You Want to Understand [Z].

I don’t have anything against templates per se, and as I said before, generic content has kept a roof over my head for years. I just don’t think I’d notice the difference if the machines took over this kind of writing. And I don’t think it’s remotely the same thing as trying to explain, without falling back on the usual clichés, how it actually feels to lose your home, or go deaf, or watch a child you helped raise become an abusive drug addict.

I’m not at all optimistic about the future of my industry. But I’m hardly the first person to be made redundant by new technology, and I don’t think the kind of writing I really care about is going to become obsolete. Even if we get to a point where anyone could produce a perfect realist novel by punching a few prompts into ChatGPT, the main outcome would probably be a total devaluation of the perfect realist novel. AI’s sales pitch—that the tool will make you more creative and productive—is a fallacy. If it becomes widely available, the competitive advantage it offers will soon be wiped out by a general increase in competition. If anyone can use a chatbot to become the next Sally Rooney, then no one will become the next Sally Rooney.

But perhaps there’s a grain of hope there. Perhaps out of the wreckage of the publishing industry, human writers, obeying an urge far older than the cave paintings at Lascaux, will start crafting bizarre new literary forms to represent the world. There won’t be any money in it, but there isn’t much money in it now.

If there’s one thing I hope comes out of the AI revolution, it’s a greater respect for the various things we mean when we talk about reading and writing—a clearer differentiation between product and process, consumption and encounter. Perhaps in this automated future we could find some way of delegating all the tedious content production to our personalized chatbots, which will pitch op-eds on new housing regulations to the Globe and Mail, and write listicles for Chatelaine about the Five Best Coolers for Your Summer Break.

And while the machines chatter away, securing a modest drip of cash into our bank accounts, we could spend our time writing multi-volume accounts of a single afternoon at a dentist’s office in St. Boniface or a sonnet sequence on the esoteric meaning of Toronto’s sewer system. Far from the market and the lecture hall, we could, perhaps, rediscover the strangeness of this compulsion to write, this ridiculous and necessary belief that the right combination of words can unlock the world.

André Forget (@AYForget) is the author of In the City of Pigs and the editor of After Realism: 24 Stories for the 21st Century. He lives in Sheffield and writes the Oblomovism Substack.
