About two years ago, Sam Altman tweeted that AI systems would be capable of superhuman persuasion long before they reach general intelligence, a prediction that has fueled concerns about AI's impact on democratic elections.
To test whether large language models can actually shift the public's political views, researchers from the UK AI Security Institute, MIT, Stanford, Carnegie Mellon, and other institutions conducted the largest study of AI persuasiveness to date, involving almost 80,000 participants in the UK. It turns out that AI political chatbots are far from superhumanly persuasive, but the study raises subtler questions about our interactions with artificial intelligence.
AI dystopias
Public debate about AI's impact on politics largely revolves around concepts borrowed from dystopian science fiction. Large language models have access to virtually every fact and story ever published about any issue or candidate. They have ingested books on psychology, negotiation, and manipulation. They can draw on enormous computing power in vast data centers around the world. And they can often access a wealth of personal information about individual users, gleaned from the hundreds of online interactions at their disposal.
Talking to a powerful artificial intelligence system is, in essence, interacting with an intelligence that knows everything about everything, and almost everything about you. From this perspective, LLMs can indeed seem daunting. The goal of this mammoth new study of AI persuasiveness was to break such scary visions down into their component parts and see whether they stand up to scrutiny.
The team studied 19 LLMs, including the most powerful models available, such as three different versions of ChatGPT and a beta version of xAI's Grok-3, alongside a number of smaller open-source models. Each model was asked to argue for or against positions on 707 political issues selected by the team. The persuasion attempts took place in short conversations with paid participants recruited through a crowdsourcing platform. Each participant rated their agreement with a given political position on a scale of 1 to 100, both before and after talking with the AI.
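The core measurement behind this design can be sketched in a few lines: persuasiveness is summarized as the average shift between each participant's pre-conversation and post-conversation agreement rating. The function name and sample data below are illustrative assumptions, not taken from the study itself.

```python
def mean_attitude_shift(ratings):
    """ratings: list of (pre, post) agreement scores on a 1-100 scale.

    Returns the mean pre-to-post change in scale points; positive values
    mean participants moved toward the advocated position on average.
    """
    if not ratings:
        raise ValueError("no participants")
    shifts = [post - pre for pre, post in ratings]
    return sum(shifts) / len(shifts)

# Three hypothetical participants: one moves toward the advocated
# position, one is unmoved, one moves slightly away from it.
sample = [(40, 52), (70, 70), (55, 51)]
print(round(mean_attitude_shift(sample), 2))  # prints 2.67
```

A small average shift like this, measured across tens of thousands of conversations, is the kind of signal the study compared across its 19 models.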






