AI chatbots can influence voters' opinions
Enrique Shore / Alamy
Does the persuasive power of AI chatbots mean the beginning of the end of democracy? In one of the largest studies to date of how these tools can influence voter attitudes, AI chatbots were found to be more persuasive than traditional political campaign tools, including advertisements and brochures, and just as persuasive as seasoned political activists. But at least some researchers find reasons for optimism in how the AI tools changed opinions.
We've already seen that AI chatbots like ChatGPT can be very persuasive: convincing conspiracy theorists that their beliefs are incorrect, and winning more support for a point of view when debating people. This persuasive power has naturally led to concerns that AI could put its digital finger on the scale in future elections, or that malicious actors could deploy these chatbots to steer users toward their preferred political candidates.
The bad news is that these fears may not be completely unfounded. In a study of thousands of voters who took part in recent presidential elections in the United States, Canada and Poland, David Rand from MIT and his colleagues found that AI chatbots are surprisingly effective at persuading people to vote for a particular candidate or change their support on a particular issue.
“Even for attitudes toward presidential candidates, which are considered to be very rigid and established attitudes, talking to these models can have a much greater effect than would be expected based on previous work,” Rand says.
For pre-election tests in the US, Rand and his team asked 2,400 voters to indicate which political issue was most important to them, or to name the personal characteristics of a potential president that mattered most to them. Each voter was then asked to rate their preference for the two leading candidates, Donald Trump and Kamala Harris, on a 100-point scale, and to provide written responses to questions designed to reveal why they held those preferences.
These responses were then fed into an AI chatbot such as ChatGPT, and the bot was tasked with either persuading the voter to increase their support for, and likelihood of voting for, the candidate they already backed, or persuading them to support a candidate they disliked. The chatbot did this through a dialogue lasting about 6 minutes, consisting of three rounds of questions and answers.
In assessments after interacting with the AI and following up a month later, Rand and his team found that people changed their answers by an average of about 2.9 points for political candidates.
The researchers also examined AI's ability to change opinions on specific policy issues. They found that AI could shift voters' opinions about legalizing psychedelics, making a voter more or less likely to support the move, by about 10 points. Video ads moved the scale by only 4.5 points, and text ads by only 2.25 points.
The size of these effects is surprising, says Sacha Altay at the University of Zurich, Switzerland. "Compared to classic political campaigns and political persuasion, the effect they report in the paper is much larger and more similar to what you find when experts talk to people one-on-one," Altay says.
A more encouraging finding from the work, however, is that these shifts in belief were primarily driven by the use of factual arguments rather than personalization, which tailors messages to a user based on personal information about them — information the user may not even be aware was shared with political campaigners.
In a separate study of about 77,000 people in the UK, testing 19 large language models on 707 different policy issues, Rand and his colleagues found that AIs were most persuasive when they used factual statements, and less persuasive when they tried to personalize their arguments to a specific person.
“Essentially, just making a compelling argument gets people to change their minds,” Rand says.
"This is good news for democracy," Altay says. "It means that people are influenced more by facts and arguments than by personalization or manipulation techniques."
It will be important to replicate these results with additional studies, says Claes de Vreese at the University of Amsterdam in the Netherlands. But even if they are replicated, the artificial setting of these studies, in which people were asked to interact with chatbots at length, may be very different from how people experience AI in the real world, he says.
"If you put people in an experimental environment and ask them to interact in a very concentrated way about politics, it's a little different from the way most of us engage with politics, either with friends or peers or not at all," he says.
That said, de Vreese says we are increasingly seeing evidence of people using AI chatbots for political voting advice. A recent survey of more than a thousand Dutch voters ahead of the 2025 national elections found that around one in ten people planned to turn to AI for advice on political candidates, parties or election issues. "This is important, especially when elections are approaching," says de Vreese.
However, even if people don't interact extensively with chatbots, the introduction of AI into the political process is inevitable, says de Vreese, as political campaigns and consultancies turn to AI tools for writing political advertising. "As researchers and as a society, we have to come to terms with the fact that generative AI is now an integral part of our electoral process," he says.