Common Misunderstandings About AI in 2025

In 2025, misconceptions about AI were rampant as people struggled to keep up with the technology's rapid development and adoption. Here are three popular misconceptions worth leaving behind in the New Year.

AI models hit a wall

When GPT-5 was released in August, people wondered (not for the first time) whether AI had hit a wall. Despite the significant naming update, the improvement seemed incremental. The New Yorker published an article titled “What If AI Doesn't Get Much Better Than This?”, arguing that GPT-5 was “the latest product to indicate that progress in developing large language models has stalled.”

It soon became clear that, while a milestone in naming, GPT-5 was primarily an attempt to deliver comparable performance at a lower cost. Three months later, OpenAI, Google, and Anthropic released models showing significant progress on economically valuable tasks. “Contrary to popular belief that scaling is over,” the performance jump in Google's latest model was “as big as we've ever seen,” Google DeepMind deep learning lead Oriol Vinyals wrote after the release of Gemini 3. “No walls in sight.”

There is reason to wonder exactly how AI models will keep improving. In areas where training data is expensive to obtain (such as using AI agents as personal shoppers), progress may be slow. “Maybe AI will get better, and AI will probably continue to suck in many important ways,” wrote Helen Toner, interim executive director of the Center for Security and Emerging Technology. But the idea that progress has stalled is difficult to justify.

Self-driving cars are more dangerous than human drivers

When the AI powering a chatbot fails, the result is usually a mistake in someone's homework or a miscount of the number of “r”s in the word “strawberry.” When the AI powering a self-driving car fails, people can die. It's no surprise that many people are hesitant to trust the new technology.

In the UK, a survey of 2,000 adults found that only 22% would feel comfortable riding in a self-driving car. In the US, the figure was 13%. In October, a Waymo vehicle killed a cat in San Francisco, sparking outrage.

In many respects, however, autonomous cars appear to be safer than human drivers, according to an analysis of data from 100 million driverless miles by Waymo. Waymo cars were involved in roughly five times fewer crashes resulting in injury, and 11 times fewer crashes resulting in “serious or worse injury,” than human drivers.

AI cannot create new knowledge

In 2013, mathematician Sébastien Bubeck published a paper on graph theory in a prestigious journal. “We left some open questions, and then I worked on them with Princeton graduate students,” says Bubeck, who is now a researcher at OpenAI. “We resolved most of the open problems, except one.” More than a decade later, Bubeck handed the task to a system built on GPT-5.

“We gave it two days to think,” he says. “There was a wonderful identity that the model found, and that actually solved the problem.”

Critics argue that large language models such as GPT-5 have nothing original to offer and merely copy the information they were trained on, earning LLMs the derisive nickname “stochastic parrots.” In June, Apple published a paper claiming to show that any reasoning abilities on the part of LLMs are an “illusion.”

Of course, the way LLMs generate their answers differs from human reasoning. They can fail to interpret simple charts even as they win gold medals at top mathematics and programming competitions and “autonomously” discover “new mathematical constructions.” But struggling with simple problems apparently doesn't stop them from coming up with useful and complex ideas.

“LLMs can certainly follow a sequence of logical steps to solve problems that require deduction and induction,” Dan Hendrycks, executive director of the Center for AI Safety, told TIME. “Whether someone chooses to call this process ‘reasoning’ or something else is up to them and their vocabulary.”
