Loading Han's text into GPT-4o, Chakrabarty fine-tuned new versions of the model on the work of twenty-nine other authors, including my close college friend Tony Tulathimutte. Jia Tolentino once praised Tony's stories, writing that his "deviant instincts crackle in almost every line." I've been reading him since the early 2000s, and yet his AI clone could easily have fooled me. Here is one AI-generated line: "He finally counted eighteen breaths and, to linger longer, opened a new document and composed a marriage proposal that he would send to the first man who could make him cum without dildos or videos."
Chakrabarty began his project out of intellectual curiosity but became increasingly concerned about its implications. Pangram, an AI-detection program, failed to flag almost all of the prose generated by his fine-tuned models. This suggested that anyone with some storytelling skill could feed a plot into a fine-tuned chatbot, put their name on the resulting manuscript, and try to get it published. People often dismiss literature created by artificial intelligence; after all, we read books to gain access to other people's minds. But what if we can't tell the difference? When Chakrabarty returned from Japan, he invited Jane Ginsburg, a Columbia University professor specializing in copyright law, to join him and Dillon as a co-author of the research paper. Ginsburg agreed. "I don't know what scares me more," she told me, "the ability to create this content, or the prospect that this content could actually be commercially viable."
Chakrabarty, now a professor of computer science at Stony Brook University, recently released a preprint of the study, which has not yet been peer-reviewed. The paper notes that the graduate students ultimately compared thirty AI-generated passages (one simulating each author in the study) with passages written by their peers. They were not told what they were reading; they were simply asked which they liked better. In almost two-thirds of cases, they preferred the AI's output.
Reading the copyrighted original excerpts alongside the AI simulations, I was surprised to find that I liked some of the simulations just as much. The AI version of Han's scene about the death of a newborn struck me as corny in places, but its line about the mother's singing seemed more surprising and precise than the original. I also noticed some good things in the Junot Díaz simulation. In "This Is How You Lose Her," Díaz writes: "The only thing she warned you about, the one thing she swore she would never forgive, was cheating. I'll put a machete in you, she promised." To my ear, the AI's delivery was more rhythmic and economical: "She told you from the very beginning that if you ever cheated on her, she would chop off your little pito." I studied Spanish for a couple of years, but I had to look up pito, a word for "whistle" that I had never encountered. Google's AI Overview informed me that in some places it is also slang for "penis." I decided that was Díazian enough.
When I wrote to the authors whose work was used in the study, most declined to be interviewed or did not respond. But some emailed their thoughts. Lydia Davis wrote: “I think the point is certainly made: AI can produce a decent paragraph that can fool a human into thinking it was written by a certain person.” Orhan Pamuk said: “I'm sure much more accurate imitations will appear soon.”
Díaz and Sigrid Nunez agreed to be interviewed. Over Zoom, I asked Díaz about the chopping off of the pito. "Pito, of course, just means 'whistle,'" he said, sounding puzzled. I told him that, according to the internet, it could also be a double entendre. "My memory sucks, but in all the years I've been a fucking Dominican in the diaspora, I've never heard that," he told me. He thought his AI counterpart's slang was geographically and historically incoherent. "I tend to write in a very specific, time-stamped Jersey slang," he said. Plus, he added, the AI's pacing and characterization weren't very good.
Nunez called her AI imitator "totally banal." "This is not my style, my story, my sensibility, my philosophy of life. This is not me," she told me. "This is a machine that thinks I am like this." When I pointed out that the experienced graduate students had found the passage well written, she questioned whether they had paid close enough attention, suggesting that they had made snap judgments so they could get back to their own writing. (She didn't like their imitations, either.) "If I thought that reflected anything that actually had to do with my work, I would shoot myself," Nunez said.





