Humanity has long feared the corrupting influence of new technologies. That anxiety is as old as Socrates, as old as writing itself.
“This discovery of yours will create forgetfulness in the souls of the learners, because they will not use their memories,” Socrates warned around 360 BC, in one of the earliest known criticisms of writing. It survives today only because Plato wrote it down.
More than 2,000 years later, the ancient philosopher's critique of writing can easily be swapped for the collective agitation that each newfangled technology has stirred up: the printing press, television, the internet, social media, each fueling fears of cognitive decline. Most recently, generative AI tools like ChatGPT have revived this age-old panic.
Technologies we have used for decades, such as Google Search and navigation apps, have also prompted warnings about harmful effects on cognitive processes such as memory and spatial orientation. But some researchers believe that generative AI, which has only become widely available in the past few years, could cause unprecedented damage over the long term. That is because our chats with chatbots can feel like genuine conversations with people we trust, and it can be difficult to verify what these tools are telling us.
Over-reliance on chatbots could erode the very skills people need in their professions.
This is especially concerning as chatbots increasingly spit out misinformation within compelling, humanlike interactions. AI chatbots “are not just repositories of information; they mimic human conversation, adapt to user input, and can provide personalized responses,” according to a 2024 paper published in Frontiers in Psychology. “This dynamic interaction may lead to a different type of cognitive dependence compared to static sources of information.”
The power we give to chatbots is “similar to the power we give to other people,” says Olivia Guest, a computational cognitive scientist at Radboud University in the Netherlands. “That’s what’s very scary.”
To better understand the impact of generative AI on our brains, researchers have run small studies over the past few years, collecting both quantitative and qualitative data: brain scans, survey results, and performance on various tasks. These studies suggest that the use of chatbots and other generative AI tools could, at least in the short term, erode problem-solving skills, encourage mental laziness, and harm our ability to learn, among other effects.
Over-reliance on chatbots may even impair the skills people need to perform professionally, says Iris van Rooij, a computational cognitive scientist at Radboud University.
By relying on chatbots to read, write, code, or do other work that matters to us, we lose opportunities to practice those important skills, she says. And as our expertise wanes, it becomes harder to spot chatbot errors that would slip past non-experts, leading to a “downward spiral.”
But it's important to take recent findings about the dangers of using chatbots with a grain of salt, says Sam Gilbert, a cognitive neuroscientist at University College London. He argues that it would be difficult to conduct “proper” controlled experiments to conclusively link regular use of any widespread technology such as AI chatbots to long-term detrimental effects on our minds – this would require comparing people who were chronically exposed to specific technologies with those who did not use them at all.
At this stage, finding an unexposed comparison group for chatbots would be difficult, and since chatbots have only been widely available for a few years, it is too early to assess long-term effects. Additionally, Gilbert says, it would be “unethical” to deny people access to any technology in a long-term randomized trial.
Chatbots are increasingly spreading misinformation.
Gilbert researches the concept of “cognitive offloading,” the process of shifting mental effort onto external aids, be it pen and paper or a chatbot. Offloading information from our minds onto a screen isn't necessarily harmful, Gilbert's research has found, and may even free up mental capacity for other information.
Ultimately, alarmists' claims that we are vulnerable to “digital dementia,” a concept popularized in 2012 that links over-dependence on technology to cognitive decline, are backed by “extremely weak evidence,” says Gilbert, due to the lack of controlled experiments. Meanwhile, some studies that followed older people over two decades found that the use of digital technologies predating chatbots is actually associated with a lower risk of cognitive impairment.
Gilbert also cautions that changes in brain activity recorded while subjects actively use generative AI do not necessarily indicate long-term dangers, but rather momentary shifts during a specific task. “It just tells us how people use their brains to solve one specific problem,” Gilbert says. “We need to be very careful how we interpret this data, and to my knowledge there is no neural evidence that technology is harming our general cognitive skills.”
Still, we should test whether AI tools actually produce better content than our own heads when, say, writing an essay or putting together a work proposal, he says. Gilbert recommends taking stock of your mental toolbox, a process known as metacognition, or thinking about how you think. It's important to have a good sense of your own abilities, such as your writing skill or memory capacity, before leaning too heavily on a chatbot or any other digital resource. This cuts both ways, Gilbert explains: someone who is overconfident in their memory, for example, may reject digital reminders and forget to take their medications.
“I don't think people should avoid offloading entirely,” Gilbert says. “I really think it's important to get a good handle on your abilities without a tool, and your success with that tool, to find out if it's really helping you.”
Across academic fields, views on the use of AI differ sharply. While some researchers believe that the responsible use of AI tools such as chatbots can complement human intelligence, Guest and van Rooij take a different view: they say that chatbots in their current forms fail to offer any tangible benefit because of their technical shortcomings and are, in Guest's words, “actively harmful.” Together with researchers from Europe and the United States, Guest and van Rooij recently argued against “the uncritical adoption of artificial intelligence technologies in academia.” They wrote:
“We can and should refuse to accept that AI outputs are 'good enough,' not only because they aren't good, but because there is inherent value in thinking for ourselves. We can't all produce poetry at the level of a professional poet, and perhaps to a complete beginner the outputs of an LLM will seem 'better' than our own attempts. But maybe that's what being human is all about: learning something new and sticking with it, even if we don't become world-famous poets.”
Lead image: ollagery / Shutterstock






