If “[e]very media era gets the fabulists it deserves,” as Nicholas Hune-Brown writes, then such fabulists, if they’re lucky, get the profiles they merit.
Nick’s feature about the strange, sad case of “Victoria Goldiee”—a phantom writer whose spree of bylines, in publications ranging from the Guardian to Architectural Digest, has all the watermarks of chatbot prose—is the must-read piece of this closing year. With 2025 bringing flirty AI companions and lawsuits against AI giants and a looming AI bubble, his tale about the ease with which synthetic voices can now pass for human hits with the force of a horror story.
Goldiee, who claimed she had written for The Walrus, never did. She did, however, pitch a number of editors here, including me. Nick’s crack reporting revealed the bullet we dodged. But his forensic unravelling of a fraudster is also a profound act of creative counterpoint, as authentic in its precision as Goldiee’s pieces are phony. By his example, he holds up the standard her tainted pieces—and what our ChatGPT era threatens to foist on us—can never meet.
I have known Nick for years—known and admired him as both an editor and a colleague. We talked, by Zoom and later email, about what the Goldiee-ization of journalism might mean for our industry.
Two years ago, Sports Illustrated published a batch of fictitious stories by non-existent writers. All of it was AI-generated—even the author portraits. They swiftly took the content down. It isn’t hard to find similar examples of generative fakery. So, in one sense, there’s nothing fundamentally new in what you reported. The increasingly synthetic nature of journalism is something we have to deal with and worry about all the time. But your story feels different. Why do you think the reaction to it has been so dramatic?
That’s a good question. The example of the Sports Illustrated stories is, in my mind, pretty reprehensible. But, in that case, the shell of a once great magazine decided to use AI to create stories. This is more a situation where you see publications being duped by bad actors who, aided by technology, are sneaking in stuff that appears totally made up. And if you’re an editor, that’s a scarier prospect. Because you’re not choosing to do that. You’re getting tricked. And I think that element of deception makes it different.
And why? To be clear, what makes it scarier?
Whenever I’ve gone into journalism classes over the last twenty years, I’ve always said that an amazing pitch can be this key that opens doors you never thought could be opened. That’s how I got my career, just writing good pitches. Someone doesn’t have to know you, but they can see your ability on the page. Now, as an editor, I open my inbox and I understand there doesn’t need to be any link between the pitch and the person. They can be completely separate things. And I don’t know how you work in that kind of world.
That brings me to the next part: why do it? Sure, journalism has always had its fabricators. Stephen Glass, whom you cite, being an obvious example. We ran our own piece last year about another fabulist, Blair Mastbaum, who figured out how to game the system for a repeat customer, Atlas Obscura. But in both cases, the deceptions were the act of an otherwise gifted writer working in deranged fealty to a very traditional idea of reporting. They lied, yes, but they came up with the lies themselves! It was still work! But to hand over the entire commissioning cycle to AI—from pitch to draft—feels like a win with no victory. Why go through the effort of making no effort?
This is what I was obsessing about for months. I don’t have any clear answers. At certain points, I thought Goldiee was one byline out of many that a content farm was using, the same way it might run other email scams. I don’t think that’s the case anymore. I think it’s a real individual. I can speculate about psychology or whatever, but one thing to remember is what this technology enables in a global world of wild inequalities. AI can make gaps in geography or cultural understanding disappear. Based on what this individual has written, and some elements in their social media before it disappeared, I believe they’re either from or still live in Nigeria. And if that’s the case, a terrible word rate at a publication in the United States could actually be a pretty decent payday—especially if you just have to enter it into ChatGPT.
Fascinating. That almost makes it less a story about deception and more one about economic conditions.
I think this is something we’re going to see across different industries. After I published the story, I heard from a social scientist who said they encountered the same thing when trying to do qualitative studies. This is where you put out a call and offer a small honorarium, right? And in the last year, they’ve looked at the IP addresses of participants when they come online, and it seems many are in Nigeria. I don’t know why Nigeria exactly. But these people were using AI to give generic responses, and then, in the Zoom call, turning off the camera and, between long pauses, presumably ChatGPTing answers to the researcher’s questions. So, they were kind of at a loss about how to keep doing the qualitative studies they usually do.
I’ve written a bit about AI’s effect on poetry and how it has reshaped expectations about what poems are. One of the conclusions I came to was that it’s harder to spot an AI poem because, over the decades, our expectations for poetry have lowered to the point where the benchmarks we used to judge its quality have disappeared. We have made it easier for people to use chatbots and claim it’s a poem because we have relentlessly chipped away at any kind of framing for judging the art. Is there an equivalent in journalism? Have we diminished our expectations for what journalism is to the point where ChatGPT can easily step in and do the work?
100 percent.
Because some of the places that published her were not fly-by-night places. As you hint, maybe they’re under pressure to feed an endless demand for content, maybe the editorial teams are leaner and just don’t have the resources to vet stuff. Or the staff is being pushed into revenue-generating activities that make it harder to do the one thing that might have helped: edit properly. What jolted me about your piece, to be honest, is that it’s more than a judgment on the Victoria Goldiees of our time. It’s also a judgment on an industry that found her writing credible enough to let through.
Like I wrote, this is only possible in an incredibly degraded media environment. Some of the pieces that went up obviously weren’t fact-checked, but they can’t even have been read with a critical eye.
Let’s flip this around. Does it really matter if there’s no human behind the text? Does it fundamentally change the meaning of what’s there? Me, I like to think that there’s a type of irreducible value created by the knowledge that someone wrote something—that it wasn’t a machine. But is that just sentimental? Do you feel that it’s vital the texts we publish are written by actual people?
I see no value in a piece of writing that, structurally, looks like it’s done by a human but does not actually convey any human experience. It has no value to me, even if I might not be able to spot it at this point, unfortunately. That piece in the Guardian that I mentioned in my story, about experiencing underground music and moving through all these spaces in England, was very vividly written and moving in a way that stirred some commenters. But it didn’t express anything real. I mean, I’m not sure Goldiee ever lived in England.
But you were impressed by it.
It was a good ChatGPT piece. It was impressive, and I could see why anyone would be fooled by it. I could see why they would enjoy it. But it has no value to me if it’s not created by a person. When I think about most of the work we do at The Local, I can’t see any way a computer can do it—phone someone up, talk to them, discover new things. That has value. You’re asking an interesting philosophical question. I’m just not into any of it right now.
My editor-in-chief role at The Walrus has overlapped almost perfectly with the ChatGPT era. And so, I’ve been obsessing about it a fair bit. I find it’s shifting how I think about pitches, moving away from questions like, “Is this interesting enough to report out?” and towards, “Is there a mind at work here that readers want to spend time with?” We’re inching toward more seasoned writers—writers who can turn a phrase and have a vocal print. And we know that because they’ve published other things. They have a voice and have built a career around that voice.
I wonder if the threat is going to drive us back towards a New Journalistic mindset, where words are expected to leap off the page. Bravura turns an AI can’t quite replicate, because its sense of language isn’t tied to actual experiences and an urgency to understand. I find myself starting to reconnect with the idea that writerliness is the expression of an organism, that Le style, c’est l’homme même, as the French put it. Sentences carry the rhythm of someone’s gait. Cadence mimics breath. Syntax is about temperament. The style isn’t just a cosmetic thing. It’s in the body. It’s somatic. Kevin Patterson writes differently than Brett Popplewell, who writes differently than Cathrin Bradbury, who writes differently than Courtney Shea. Chatbots can replicate the skin of their sentences but can’t incarnate the substance.
That is probably too weird for what we’re hoping for in this conversation!
No, that’s interesting. I probably thought that way a year ago. I thought there was something innately human that a chatbot can never synthetically recreate. And I don’t think so anymore. I don’t know if there’s a way anymore to see a piece of writing and recognize the human soul behind its structure. Maybe there are still some stilted ChatGPTisms that give it away—people always flag em dashes, which I take such offence to as someone who uses a lot of them! But in a few months even those tells might be gone. So, my instinct is almost the opposite. I think the thing that cannot be replaced at all is the reporting. I think I value the person who has sat through events, who has picked up the phone and spoken to real people, sometimes people no one else has spoken to. I’m thinking that is the differentiating quality.
Maybe we need a marriage of both? Writers who write in a way that’s inherent to them and who can couch that writing in reporting that’s also original.
The other thing you said that struck me, and is kind of tragic, is that the easiest way to tell that someone’s legit and that the pitch is good is their body of work. If they’ve written for you, and you know them, that is a safe way to do things. Part of what we want to do at The Local, part of what I’m always trying to do, is develop new writers. I know that’s what you folks try to do as well. That’s literally in our mandate. If I’m trying to assign seven stories, I want two of them to go to new and emerging writers who maybe haven’t done this before but whom you trust you can work with and train up. I don’t know how that trust can happen now. I don’t know how young freelancers get their foot in the door in this new world. We’re not going to give up on it and just work with people we know, but it’s definitely something we have to figure out.
I wonder how much of this filtering can be handed over to AI systems. Are editors going to be using the same tools teachers use?
Detection software doesn’t always work. It also gives a lot of false positives. Many universities have stopped using it for that reason, so I don’t know if that’s a good option.
Is this our equivalent of doping in sports—something we have to stay vigilant against?
Honestly, I think you guys are okay. I don’t think the Victoria stories that got published on other websites would get published in The Walrus or The Local. I think you have robust editing and a great fact-checking process. I’m not worried about that. I’m more worried about the front end being inundated with garbage. How do you find what’s good, right? I doubt those publications that got burned are going to hire back their copy desks and fact checkers, but that is the most obvious move. That is, if you actually care about this stuff. If you don’t care—which I think is the case with some of these publications, which are increasingly in the content business and don’t mind where it comes from—then that’s a different story.
But that’s exactly it. I do wonder if more of the industry is becoming tolerant of this kind of synthetic content, especially if they’re already doing away with the quality-control measures. My worry isn’t just the front end. Look at what AI has done to search. Readers are already being trained to see summaries as being perfectly adequate proxies for journalism itself.
And you can see the numbers already, right? Places now see, say, 70 percent less traffic.
Yes. My worry is that it could have an effect on newsrooms—encourage them to empty their room of news. I worry that operations whose sense of journalism is maybe not as acute as ours, not as ethically rooted as ours, are starting to feel that if, well, readers don’t care, maybe it becomes an opportunity to make things a lot easier on themselves. What if The Walrus and The Local end up among pockets of rearguard thinking in an industry that will hand over a lot of its content to large language models? I mean, if you’re running a media org, couldn’t you just fire everybody and chatbot your way into clicks? Ten articles a day, twenty an hour, thirty every fifteen minutes—in different voices, on every subject under the sun, conjured instantaneously and all for the price of a subscription to Claude or ChatGPT. Isn’t that a reality we might be heading towards? What if Sports Illustrated is our future and not a blunder? If there isn’t a desk somewhere deep in Condé Nast mulling this, I’d be surprised.
That would be a way to kill what remains of the journalism industry. You can’t give an inch to it. You can’t have a mix of things that are true and not true in your news publication or the whole thing is destroyed. There’s no space for that at all. That’s where I’m at right now. And I think, economically, it doesn’t make any sense. It might in the short term, but as search gets degraded, as this fake stuff proliferates across the internet, there is going to be real value in being able to go to a place and know the stuff there is true and human. I hope, and trust, it will be something readers find valuable.
That’s a great place to end. Thank you, Nick.