New study finds AI chatbots can influence some Canadians to change their vote

Chatting with an artificially intelligent chatbot could successfully persuade people to change their votes and influence the outcome of future elections, according to a new study.

The study, which surveyed 1,530 Canadians, also found that chatbots were more successful at persuading Canadians to change their votes than they were at persuading Americans.

Gordon Pennycook, a Canadian and associate professor at Cornell University, said the purpose of the study was to find out how persuasive generative AI can be when it comes to policy.

“The answer is that it's very persuasive and more persuasive than traditional forms of political persuasion like advertising and the like,” said Pennycook, one of the study's authors.

The study, published in the journal Nature, found that one in 21 U.S. respondents to an experiment in the fall of 2024 were convinced to switch their vote to Kamala Harris after interacting with an artificial intelligence chatbot, and one in 35 were convinced to switch their vote to Donald Trump.

In the Canadian portion of the study, which took place during the final week of the federal election in April, participants were asked which of 17 political issues were most important to them when deciding who to vote for in the election. All communication took place in English, and there is no information about where in Canada the participants lived.

The study found that interaction with the chatbot prompted some participants to change their voting intentions.

“In Canada, in the pro-Carney condition, one in nine switched, which is a lot of people,” Pennycook said. “In the pro-Poilievre condition, when the AI was convincing people to vote for Poilievre, one in 13 switched.

“A lot of people are changing their minds … if you targeted it at certain right-wing voters in certain ridings, you could flip the election.”


Pennycook said one reason AI chatbots can be effective in political persuasion is that they tailor their arguments to each respondent.

The study also found that the chatbot was more effective at persuading people to change their votes when it was allowed to use facts to do so.

“The persuasion effect was nearly three times stronger in the Canadian federal election than the effect observed in the US experiment, but depriving the AI of the ability to use facts and evidence reduced the effect by more than half,” the authors write.

Pennycook noted that study participants spent six to eight minutes interacting with the AI chatbot, rather than watching a quick ad.

Pennycook said the difference in persuasiveness between U.S. and Canadian participants may be due to persistent political campaigning in the U.S.

“Americans are inundated with non-stop election content,” he said. “And that makes it much harder to switch, to change people’s minds.”

The study concluded that communicating with an artificial intelligence chatbot “can have a significant impact on voter attitudes,” but said it remains to be seen how effective the technology would be if used in political campaigns.

“It is highly likely that artificial intelligence-based approaches to persuasion will play an important role in future elections—with potentially serious consequences for democracy,” the authors write.

Although the Canadian experiment was conducted during the federal election, in which some races were won by only a small number of votes, Pennycook doubts that it could have affected any results.

“There's no real way to know, but I think it's unlikely that this study of a thousand people would change the outcome of the election,” he said, noting that participants came from all over Canada.


While Canada has strict rules on the use of advertising and other tools to persuade voters during an election campaign, Elections Canada says there are few, if any, rules governing the use of AI in a campaign. However, someone could break the law if they use AI to falsely impersonate an election official, or to send out materials that falsely claim to be from an election official, political party, or candidate.

Chief Electoral Officer Stéphane Perrault has recommended changes to election law to address potential emerging threats from AI, such as requiring election messages generated or manipulated using AI to include a transparency marker, and requiring chatbots or AI search functions to indicate in their responses where users can find official or authoritative information.

The Office of the Commissioner of Canada Elections, which investigates complaints, said it received several complaints regarding the use of AI in the recent election. But in a June statement, Commissioner Caroline Simard said there was no indication that the use of AI had affected the results.

Fenwick McKelvey is an assistant professor at Concordia University who researches online social media platforms. (Joseph Tunney/CBC)

Fenwick McKelvey, an assistant professor of communication studies at Concordia University in Montreal and co-director of the university's Institute for Applied AI, praised the study, saying it documents how generative AI can influence voting intentions.

“We know this kind of work can be compelling,” he said.

McKelvey said political parties in other countries, such as Mexico, have already begun using chatbots as part of their persuasion strategies.

McKelvey said one cause for concern could be the combination of generative AI chatbot technology with the databases of Canadian voters that political parties have already built, databases that are exempt from Canadian privacy laws.

“Because of that lack of oversight, databases and the data they hold can now be used in ways that no one consented to,” he said.

McKelvey said political parties should be subject to Canada's privacy laws and the government should take steps to mitigate potential harms from the use of AI in advertising.
