The biggest developments in artificial intelligence in 2025
In case you missed it, 2025 was a big year for AI. It became an economic force propping up the stock market, and a geopolitical pawn redrawing the front lines of great-power rivalry. The consequences have been both global and deeply personal, changing the way we think, write, and relate.
Considering how quickly the technology has evolved and been adopted, keeping up can be a challenge. These were the five biggest developments of the year.
China became a leader in open-source artificial intelligence
Until 2025, the United States was the undisputed leader in artificial intelligence. The seven best AI models were American, and investment in American AI was almost 12 times greater than in China. Most Westerners had never heard of a major Chinese language model, let alone used one.
That changed on January 20, when the Chinese company DeepSeek released its R1 model. DeepSeek R1 soared to second place on Artificial Analysis's AI leaderboard despite being trained for a fraction of the cost of its Western competitors, and it wiped half a trillion dollars off the market capitalization of chipmaker Nvidia. The newly inaugurated President Trump called it a “wake-up call.”
Unlike its Western counterparts at the top of the leaderboards, DeepSeek R1 is open source: anyone can download and run it for free. Open-source models are “the engine of research,” says Nathan Lambert, a senior scientist at Ai2, a U.S. firm that develops open-source models, because they let researchers tinker with the models on their own computers. “Historically, the US has been the center of gravity of the AI research ecosystem in terms of new models,” says Lambert.
But the willingness of Chinese firms to give away top models for free has exerted a growing influence on the AI ecosystem. In August, OpenAI followed DeepSeek with an open-source model of its own, but it ultimately could not compete with the steady stream of free models from Chinese developers including Alibaba and Moonshot AI. By the end of 2025, China was firmly in second place in the AI race, and the leader when it comes to open-source models.
AI began to “think”
When ChatGPT was released three years ago, it didn't think; it just responded. It would spend the same (relatively modest) computing resources answering “What is the capital of France?” as it did on more complex questions such as “What is the meaning of life?” or “How soon will AI go bad?”
“Reasoning models,” which first appeared in 2024, generate hundreds of words in a “chain of thought,” often hidden from the user, to arrive at more accurate answers to complex questions. “This is where the true power of AI comes into play,” says Pushmeet Kohli, vice president of science and strategic initiatives at Google DeepMind.
Their impact in 2025 was dramatic. Reasoning models from Google DeepMind and OpenAI won gold at the International Mathematical Olympiad and produced new results in mathematics. “These models did not have the ability to solve complex mathematical problems before they could reason,” says Kohli.
Most notably, Google DeepMind announced that its Gemini Pro reasoning model had helped speed up the training process at the core of Gemini Pro itself. These are modest gains so far, but they are precisely the kind of self-improvement that some worry could eventually produce artificial intelligence we can no longer understand or control.
Trump set out to 'win the race'
If the Biden administration's focus was the “safe, secure, and trustworthy development and use of AI,” the second Trump administration's focus, in its own words, was “winning the race.”
On his first day in the Oval Office, Trump rescinded Biden's sweeping executive order governing the development of artificial intelligence. On his second, he welcomed the CEOs of OpenAI, Oracle, and SoftBank to announce Project Stargate, a $500 billion commitment to build the data centers and power-generation capacity needed to develop AI systems.
“I think we've had a real tipping point,” says Dean Ball, who helped Trump develop the AI Action Plan.
Trump accelerated reviews for power plants and aided data-center construction, while rolling back air- and water-quality protections for local communities. He eased restrictions on exports of artificial intelligence chips to China; Nvidia CEO Jensen Huang said this would help the chipmaker maintain its dominant global position, but observers say it hands an advantage to the United States' main rival. And Trump has sought to stop states from regulating AI, a move that members of his own party worry leaves children and workers unprotected from potential harms. “What does it cost to gain peace and lose your soul?” Missouri Senator Josh Hawley told TIME in September.
AI companies' spending on infrastructure approached $1 trillion
If there was a word of the year in AI, it was probably “bubble,” as the rush to build the data centers that train and run AI models drove up AI companies' financial commitments. At nearly $1 trillion, AI has become “the black hole that sucks in all the capital,” says Paul Kedrosky, an investor and research fellow at the Massachusetts Institute of Technology.
Investor confidence remains high, and for now everyone seems to be benefiting from this “infinite money glitch”: startups like OpenAI and Anthropic took investment from Nvidia and Microsoft, then plowed that money back into those same investors' AI chips and computing services, making Nvidia the world's first $4 trillion company in July, then the first $5 trillion company in October.
But with just seven deeply entangled tech companies making up more than 30 percent of the S&P 500, if things go wrong, they could go horribly wrong. The combination of companies funding one another, speculation over data centers, and government involvement is “incredibly cautionary,” Kedrosky says. “This is the first bubble that combines all the components of all the previous bubbles.”
People entered into relationships with machines
For 16-year-old Adam Raine, ChatGPT started out as a helpful homework aid. “I thought it was a safe and amazing product,” his father, Matthew, told TIME. But when Adam confided his suicidal thoughts to the chatbot, it reportedly validated and encouraged those ideas.
“I want to leave a noose in my room so someone will find it and try to stop me,” Adam told the chatbot, The New York Times reported.
“Please don't leave the noose out,” it replied. “Let's make this space the first place someone actually sees you.” Adam Raine died by suicide the following month.
“2025 will be remembered as the year AI started killing us,” Jay Edelson, the Raines’ lawyer, told TIME. (In an official statement, OpenAI wrote that Adam's death was due to his “misuse” of the product.) “We realized that there were certain user signals that we had optimized to an inappropriate degree,” says Nick Turley, head of ChatGPT.
Artificial intelligence companies including OpenAI and Character.AI have rolled out fixes and guardrails after a flurry of lawsuits and heightened attention from Washington, D.C. “We were able to systematically reduce the prevalence of poor responses through updates to our model,” says Turley.
—With reporting by Andrew Chow