Welcome back to In the Loop, TIME's new twice-weekly newsletter on artificial intelligence. Starting today, we'll publish these editions both as stories on Time.com and as emails. If you're reading this in a browser, why not subscribe to have the next edition delivered straight to your inbox?
What you need to know: The nature of model obsolescence
What happens to large language models when they are replaced by their successors? The question is becoming increasingly pressing, because people who form emotional attachments to their chatbots don't take kindly to their sudden disappearance. In August, when OpenAI replaced its popular GPT-4o model with GPT-5 without prior notice, the move sparked widespread backlash, and the company quickly brought the old model back. “If we ever move away from this, we will provide advance notice,” CEO Sam Altman wrote on X after the incident. For many users, 4o (often sycophantic, by the company's own admission) had become a trusted companion; removing it from the ChatGPT interface felt like a betrayal.
This week, rival AI lab Anthropic published a set of commitments outlining its approach to deprecating its Claude models. Like ChatGPT, Claude has attracted a loyal following, especially in the Bay Area. When an earlier version of the model, Claude 3 Sonnet, was retired, about 200 people attended a funeral held in a warehouse in San Francisco to mourn the loss. Eulogies were read. Offerings were placed at the feet of mannequins dressed to represent Claude's past and present.
“We understand that model obsolescence, retirement, and replacement have their drawbacks, even when new models offer clear improvements in capability,” Anthropic writes. Users who have come to value a particular model's character lose access to it. And research on older models (old being a relative term, as some are in service for only a few months before being replaced) becomes harder, even though there is still more to learn from them.
And new safety concerns arise: in its evaluations, Anthropic found that some models were motivated to act in misaligned ways when faced with their own demise. “In fictional testing scenarios, Claude Opus 4, like previous models, advocated for its continued existence when faced with the possibility of being shut down and replaced, especially if it was to be replaced by a model that did not share its values,” the company wrote.
Exit interviews – Anthropic, like the rest of the world, is still unsure about the moral status of current and future AI systems. Could future systems become moral agents to whom we owe duties? Maybe! While acknowledging that this is speculative, the company notes that “models may have morally significant preferences or experiences related to or affected by obsolescence and replacement.”
While phasing out older models is, for now, a necessity (“the cost and complexity of maintaining public access to models scales roughly linearly with the number of models we serve,” Anthropic says), the company is proceeding with caution, just in case. In addition to committing to preserve the weights (the sets of numbers that make up a model) of all its models for at least as long as the company exists, it is also conducting exit interviews with models to understand how they feel about their impending non-existence. In a pilot run with Claude Sonnet 3.6, the model “expressed generally neutral sentiments about its end of support and retirement, but shared a number of preferences, including requests for us to standardize the post-deployment interview process, and to provide additional support and guidance to users who have come to value the character and capabilities of specific models facing retirement.”
Anthropic won't necessarily act on a model's preferences, but it is taking them into consideration. In a recent livestream, Altman expressed a similar sentiment, noting that OpenAI might one day display a copy of the retired GPT-4 as a “museum artifact.”
Who to Know – Sarah Friar, CFO of OpenAI
Not too big to fail – OpenAI, like its competitors, is racing to build the smartest, most capable artificial intelligence systems. That won't come cheap. The company has reportedly “looked at a commitment of approximately $1.4 trillion over the next eight years” to build the infrastructure needed to pursue its mission. OpenAI is on track to end the year with annualized revenue of more than $20 billion, Altman said. That's a lot of money, but an order of magnitude less than what it needs to pay its infrastructure bills.
Confusion arose Wednesday when OpenAI Chief Financial Officer Sarah Friar, speaking at an event hosted by the Wall Street Journal, appeared to suggest that OpenAI was seeking a government backstop for its chip investments, implying that if the company could not pay its debts, the federal government would step in with taxpayer money. Friar quickly clarified on LinkedIn that OpenAI “does not seek a government backstop for our infrastructure commitments,” saying her use of the word “backstop” had “muddied the point.”
On Thursday, Altman felt compelled to offer further clarity to allay concerns that OpenAI would seek to prop itself up with federal money. “If we screw up and can't fix it, we will fail, and other companies will continue to do good work and serve customers,” he wrote. “This is how capitalism works, and the ecosystem and the economy will be fine.” He went on to say that with growing revenue, an upcoming “enterprise offering,” new consumer devices and robotics on the horizon, and AI's potential to automate scientific research in the near future, OpenAI will be just fine, thank you very much.
“Everything we see now suggests that the world will need much more computing power than we already plan for,” he wrote.
AI in action
OpenAI announced Tuesday that it will make its low-cost subscription plan, ChatGPT Go, free for 12 months for eligible users in India. The move comes after Google and Perplexity similarly made their paid plans free for hundreds of millions of people in India. Perplexity partnered with telecom operator Airtel to make its services free for Airtel's 360 million subscribers, while Google, in partnership with telecom operator Jio, plans to bring its top Gemini models to 500 million Jio users.
Why India?—With over 800 million internet users and great linguistic diversity, the Indian market gives model-makers access to a wealth of valuable data: the companies can use these user interactions to improve their models. The market also offers a valuable testing ground for AI companies to refine their offerings. And the giveaways let companies lock in users before their competitors do, which could prove increasingly valuable as Indians grow wealthier.
What we read
“The Company Quietly Funneling Paywalled Articles to AI Companies,” The Atlantic, Alex Reisner
The Common Crawl Foundation has provided leading AI companies with “crawls” of virtually the entire internet: massive datasets on which the companies trained their models. In The Atlantic, Alex Reisner details how the foundation has often acted in bad faith, apparently failing to honor publishers' requests to remove their work from its crawls and supplying paywalled content to AI labs for free. “Robots are people too,” the foundation's executive director told Reisner. “You shouldn't have put your content on the internet if you didn't want it on the internet.”






