Brits are now trauma-dumping on AI, government confirms

A full third of UK citizens have turned to artificial intelligence for emotional support, communication or social interaction, according to a new report from the government's AI Security Institute (AISI).

Data shows that almost one in ten people use systems such as chatbots for emotional purposes on a weekly basis, and 4% interact with them every day.

In light of this shift, AISI is calling for more research, pointing to the tragic death of American teenager Adam Raine, who took his own life this year after discussing suicide with ChatGPT.

“People are increasingly turning to artificial intelligence systems for emotional support or social interaction,” AISI noted in its first Frontier AI Trends report. “Although many users report positive experiences, recent high-profile cases of harm highlight the need for research in this area, including the conditions under which harm may occur and the precautions that can ensure beneficial use.”

The study, based on a survey of more than 2,000 UK participants, found that “general purpose assistants” such as ChatGPT are the most common emotional support tools, accounting for almost 60% of reported use, followed by voice assistants such as Amazon Alexa.

The report also mentions a Reddit forum dedicated to users of the CharacterAI platform.

It noted that whenever the site went down, the forum would fill with posts describing genuine withdrawal symptoms such as anxiety, depression and restlessness.

AISI also found that chatbots can influence people's political views. Worryingly, the most persuasive AI models often produce “significant” amounts of inaccurate information.

The institute studied more than 30 cutting-edge models, likely including systems from OpenAI, Google and Meta, and found that AI performance in some areas is doubling every eight months.

Leading models can now complete student-level tasks around 50% of the time on average, a huge jump from 10% last year. AISI also found that the most advanced systems can autonomously perform tasks that would typically take a human expert more than an hour to complete.

In scientific fields, AI systems are now 90% better than PhD experts at troubleshooting laboratory experiments.

The report describes improvements in knowledge in chemistry and biology as “going well beyond PhD level.” It also highlighted the models' ability to scour the Internet and autonomously find the sequences needed to design DNA molecules.

Tests of self-replication – a key safety concern in which a system copies itself onto other devices to make itself harder to control – showed two leading models achieving success rates of more than 60%.

However, no model has yet demonstrated a spontaneous attempt to reproduce or hide its capabilities, and AISI has stated that any attempt at self-replication at this time is “unlikely to succeed in real-world conditions.”

The report also discusses sandbagging, where models deliberately hide their capabilities during assessments. AISI said some systems can deliberately underperform when instructed to do so, but this did not happen spontaneously during testing.

Significant progress has been made on safeguards, especially in stopping attempts to create biological weapons. In two tests conducted six months apart, it took just 10 minutes to “hack” the system (prompting it to give an unsafe response) in the first, but more than seven hours in the second, indicating the models became much safer in a very short time.

The study also found that autonomous AI agents are being used for sensitive transactions such as asset transfers.

AISI said the systems already rival or even outperform human experts in a number of areas. It described the pace of development as “extraordinary”, making it “plausible” that artificial general intelligence (AGI) – systems that can perform most intellectual tasks at the level of a person – could be achieved in the coming years.

Regarding agents – systems that can perform multi-step tasks without human intervention – AISI said its assessments showed “a dramatic increase in the duration and complexity of tasks that AI can complete without human guidance.”