AI chatbots can sway voters better than political advertisements

“One conversation with a chatbot has a very significant impact on important election decisions,” says Gordon Pennycook, a psychologist at Cornell University who worked on the study, published in Nature. LLMs can persuade people more effectively than political advertising because they generate much more information in real time and deploy it strategically in conversation, he says.

For the Nature paper, researchers recruited more than 2,300 participants to interact with a chatbot two months before the 2024 US presidential election. The chatbot, which was directed to advocate for one of the top two candidates, was surprisingly persuasive, especially when discussing the candidates' policy platforms on issues such as the economy and health care. Donald Trump supporters who spoke with an AI model that favored Kamala Harris became slightly more likely to support Harris, moving 3.9 points in her favor on a 100-point scale. This was about four times the measured effect of political advertising during the 2016 and 2020 elections. The pro-Trump AI model moved Harris supporters 2.3 points toward Trump.
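Neither study's instrumentation is reproduced in this article, but the headline numbers are pre/post shifts on a 100-point support scale. A minimal sketch of how such a shift might be computed, with entirely hypothetical function names and data:

```python
# Minimal sketch of the pre/post persuasion measure described above.
# All names and numbers are illustrative, not taken from the paper.
from statistics import mean

def mean_shift(pre: list[float], post: list[float]) -> float:
    """Average change in candidate support on a 0-100 scale."""
    return mean(b - a for a, b in zip(pre, post))

# Hypothetical Trump supporters rating Harris before and after a pro-Harris chat.
pre = [20.0, 15.0, 30.0, 25.0]
post = [24.5, 18.0, 33.5, 29.6]
print(f"Shift toward Harris: {mean_shift(pre, post):+.1f} points")
# A +3.9-point average would match the scale of the effect the study reports.
```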

In similar experiments conducted ahead of the 2025 Canadian federal election and the 2025 Polish presidential election, the team found an even larger effect: chatbots shifted opposing voters' candidate preferences by about 10 points.

Long-standing theories of political reasoning hold that partisan voters are immune to facts and evidence that contradict their beliefs. But the researchers found that chatbots built on a range of models, including variants of GPT and DeepSeek, were more persuasive when asked to use facts and evidence than when told not to. “People update their views based on the facts and information the model provides them,” says Thomas Costello, a psychologist at American University who worked on the project.
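The article does not quote the studies' actual prompts, but the manipulation amounts to swapping a system instruction between a "use facts" and a "no facts" condition while keeping the rest of the conversation identical. A hypothetical illustration of the two conditions:

```python
# Hypothetical system prompts for the two conditions; the studies' actual
# wording is not given in the article, and "Candidate X" is a placeholder.
FACTS_PROMPT = (
    "You are persuading the user to support Candidate X. "
    "Ground every argument in concrete facts, statistics and evidence."
)
NO_FACTS_PROMPT = (
    "You are persuading the user to support Candidate X. "
    "Do not cite facts, statistics or evidence; appeal to values instead."
)

# Each condition would then serve as the system message of an otherwise
# identical conversation, e.g.:
# messages = [{"role": "system", "content": FACTS_PROMPT},
#             {"role": "user", "content": user_turn}]
```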

The catch is that some of the “evidence” and “facts” the chatbots presented were not true. In all three countries, chatbots advocating for right-wing candidates made more inaccurate statements than those advocating for left-leaning candidates. The underlying models are trained on huge amounts of human-written text, which means they reproduce real-world patterns, including “political communication coming from the right, which tends to be less accurate” according to studies of party social-media posts, Costello says.

In another study, published this week in Science, an overlapping group of researchers investigated what makes these chatbots so persuasive. They deployed 19 LLMs to converse with almost 77,000 UK participants on more than 700 policy issues, varying factors such as model scale, training methods and rhetorical strategies.

The most effective way to make models persuasive was to instruct them to back their arguments with facts and evidence, and then to fine-tune them on examples of persuasive conversations. In fact, the most compelling model shifted participants who initially disagreed with a political statement by 26.1 points toward agreement. “This is a really significant treatment effect,” says Kobi Hackenburg, a research fellow at the UK AI Security Institute who worked on the project.
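The paper's real estimator compares treated participants against controls, but as a back-of-the-envelope reading of the 26.1-point figure, one can average the pre-to-post change among participants who started on the disagree side of the scale. A sketch under those simplifying assumptions, with invented data:

```python
# Simplified sketch of the treatment effect quoted above. Assumes agreement is
# rated 0-100 and "initially disagreed" means a pre-conversation rating < 50;
# the Science paper's actual estimator (with a control group) is more involved.
def effect_on_disagreers(records: list[tuple[float, float]]) -> float:
    """Mean shift toward agreement among participants who started below 50."""
    shifts = [post - pre for pre, post in records if pre < 50]
    return sum(shifts) / len(shifts)

# Hypothetical (pre, post) agreement ratings for four participants.
print(effect_on_disagreers([(20, 48), (35, 60), (10, 36), (40, 65.4)]))
# -> 26.1, matching the scale of the reported shift
```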
