One in three use AI for emotional support and conversation, UK report finds

Chris Vallance, Senior Technology Reporter

A view of a data center hallway lined with dark cabinets bathed in light (Getty Images)

One in three adults in the UK use artificial intelligence (AI) for emotional support or social interaction, according to research published by a government body.

The AI Security Institute (AISI), which published the finding in its first report, said one in 25 people turn to the technology for support or conversation every day.

The report is based on two years of testing the abilities of more than 30 unnamed advanced AI models, covering areas critical to safety, including cyber skills, chemistry and biology.

The government said AISI's work will support its future plans by helping companies solve problems “before their artificial intelligence systems are widely used.”

AISI's survey of more than 2,000 UK adults found that people primarily use chatbots such as ChatGPT for emotional support or social interaction, followed by voice assistants such as Amazon's Alexa.

The researchers also analyzed an online Reddit community of more than two million users devoted to AI companions, examining what happened when the technology went offline.

They found that when the chatbots went down, users self-reported “withdrawal symptoms” such as feelings of anxiety or depression, as well as trouble sleeping or neglecting their responsibilities.

Cyber skills doubling

Beyond the emotional impact of using AI, AISI researchers looked at other risks posed by the technology's accelerating capabilities.

There is serious concern about AI enabling cyberattacks, but it can equally be used to protect systems from hackers.

The report said AI's ability to detect and exploit security flaws had, in some cases, “doubled every eight months”.

Artificial intelligence systems have also begun to perform expert-level cyber tasks that typically require more than 10 years of experience to complete.

The researchers found that the technology's influence on science is also growing rapidly.

In 2025, AI models “surpassed human biology experts with PhDs, and performance in chemistry is quickly catching up”.

“Losing control”

From novels like Isaac Asimov's I, Robot to modern video games like Horizon Zero Dawn, science fiction has long imagined what would happen if AI escaped human control.

Now, according to the report, the “worst-case scenario” of humans losing control of advanced artificial intelligence systems is “being taken seriously by many experts.”

Controlled laboratory tests have shown that artificial intelligence models are increasingly demonstrating some of the capabilities needed to replicate themselves online.

AISI investigated whether the models could perform simple versions of tasks required in the early stages of self-replication, such as passing the “know your customer” checks required to access financial services, which a system would need to acquire computers on which copies of itself could run.

But the study found that to do this in the real world, AI systems would need to perform several of these actions in sequence “while remaining undetected”, and research suggests they currently lack the ability to do so.

The institute's experts also considered the possibility of models “sandbagging”, or strategically hiding their true capabilities from testers.

Tests showed this was possible, but the researchers found no evidence that such deception had actually taken place.

In May, the artificial intelligence company Anthropic published a controversial report describing how its AI model could engage in blackmail-like behavior if it thought its “self-preservation” was at risk.

However, the threat from rogue AI is a source of deep disagreement among leading researchers, many of whom believe it is exaggerated.

“Universal jailbreaks”

To reduce the risk of their systems being used for nefarious purposes, companies employ numerous security measures.

But the researchers were able to find “universal jailbreaks”, or workarounds, for all the models studied that allowed users to bypass these protections.

However, in some models, the time it took experts to convince systems to bypass security measures increased forty-fold in just six months.

The report also notes a rise in the use of tools that enable AI agents to perform “mission-critical tasks” in sectors such as finance.

But the researchers did not examine whether AI could cause unemployment in the short term by displacing workers.

The institute also did not examine the environmental impact of the computing resources needed for advanced models, arguing that its mission was to focus on “social impacts” that are closely tied to AI capabilities rather than the more “diffuse” economic or environmental effects.

Some argue that both of these issues pose imminent and serious social threats.

And just hours before the AISI report was released, a peer-reviewed study found that AI's environmental impact could be greater than previously thought, and called for big tech companies to publish more detailed data.
