Can AI Avoid the Enshittification Trap?

I was recently on vacation in Italy. As I do these days, I ran my itinerary past GPT-5 for suggestions on attractions and restaurants. The bot advised that the best choice for dinner near our hotel in Rome was a short walk away on Via Margutta. It turned out to be one of the best meals I can remember. When I got home, I asked the model how it chose the restaurant, though I'm hesitant to share the name here in case I need a table sometime in the future. (Hell, who knows if I'll even go back: it's called Babette. Call ahead for reservations.) The response was detailed and impressive. Factors included rave reviews from locals, write-ups in food blogs and the Italian press, and the restaurant's noted blend of Roman and modern cuisine. Oh, and it was a short walk away.

Something was also required on my part: trust. I had to trust that GPT-5 was an honest broker, choosing my restaurant without bias; that the place wasn't being shown to me as sponsored content and wasn't getting a cut of my check. I could have done some deep research myself to double-check the recommendation (I did look at the website), but the whole point of using AI is to skip that hurdle.

This experience strengthened my confidence in AI's output, but it also made me wonder: as companies like OpenAI grow more powerful and try to deliver returns to their investors, will AI be prone to the same decline in quality that seems to characterize the technology platforms we use today?


Writer and technology critic Cory Doctorow calls this erosion “enshittification.” Its premise is that platforms like Google, Amazon, Facebook, and TikTok initially strive to please users, but once they've beaten out their competitors, they deliberately become less useful in order to squeeze out more profit. After WIRED republished Doctorow's groundbreaking 2022 essay on the phenomenon, the term caught on, largely because people recognized how true it was. The American Dialect Society chose enshittification as its 2023 Word of the Year. The concept has been cited so often that it has transcended its profanity, appearing in outlets that would ordinarily hold their noses at such a word. Doctorow has just published a book of the same name on the subject; the cover image is a smiley face made of…guess what.

If chatbots and AI agents become enshittified, it could be worse than Google search becoming less useful, Amazon results getting overloaded with ads, or even Facebook showing less content from friends in favor of rage-inducing clickbait.

AI is on its way to becoming a constant companion, providing one-stop answers to many of our queries. People already rely on it to interpret current events and to get advice on all sorts of purchases and even life choices. Given the enormous cost of building a full-fledged artificial intelligence model, it's fair to assume that only a few companies will dominate the field. They all plan to spend hundreds of billions of dollars over the next few years to improve their models and get them into the hands of as many people as possible. Right now, I'd say AI is at the stage Doctorow describes as being good to its users. But the pressure to recoup those huge capital investments will be enormous, especially for companies whose user bases are locked in. Those conditions, Doctorow writes, allow companies to abuse their users and business customers “to recapture all the value.”

When one imagines the enshittification of AI, the first thing that comes to mind is advertising. The nightmare is that AI models will make recommendations based on which companies paid for placement. That's not happening now, but AI companies are actively exploring the advertising space. In a recent interview, OpenAI CEO Sam Altman said, “I think we can probably create some cool advertising product that is a net benefit to the user and kind of a positive for our relationship with the user.” Meanwhile, OpenAI just announced a deal with Walmart that will let the retailer's customers shop inside the ChatGPT app. I can't imagine there would be any conflict there! The AI search platform Perplexity has a program in which sponsored results appear as clearly labeled follow-up questions. But, the company promises, “these ads will not change our commitment to maintaining a trusted service that provides you with direct and objective answers to your questions.”
