The era of AI persuasion in elections is about to begin

All this means that actors, whether well-resourced organizations or grassroots collectives, have a clear path to deploying politically persuasive AI at scale. The first demonstrations have already taken place in other countries. In India's 2024 general elections, tens of millions of dollars were reportedly spent on artificial intelligence to segment voters, identify swing voters, deliver personalized messages through bots and chatbots, and more. In Taiwan, officials and researchers documented China-linked operations using generative AI to mass-produce disinformation, ranging from deepfakes to language-model output pushing messages aligned with the Chinese Communist Party.

It's only a matter of time before this technology makes its way into U.S. elections, if it hasn't already. Foreign adversaries are well positioned to act first. China, Russia, Iran, and others already maintain networks of troll farms, bot accounts, and covert influence operations. Combined with open-source language models that generate fluent, localized political content, these operations can be dramatically amplified. There is no longer a need for human operators who understand the language and cultural context. With minimal setup, a model can impersonate a neighborhood organizer, a union representative, or a disgruntled parent, without anyone ever setting foot in the country. Political campaigns themselves are unlikely to be far behind. Every major operation already segments voters, tests messages, and optimizes delivery. AI slashes the cost of all of this. Instead of testing a slogan in polls, a campaign can generate hundreds of arguments, deliver them one-to-one, and see in real time which ones change opinions.

The basic fact is simple: persuasion has become effective and cheap. Campaigns, PACs, foreign actors, advocacy groups, and opportunists are all operating on the same playing field, and there are very few rules.

The policy vacuum

Most politicians have not caught up. Over the past few years, lawmakers in the United States have focused their attention on deepfakes while ignoring the broader, more consequential threat.

Other governments have begun to take the problem more seriously. The European Union's 2024 Artificial Intelligence Act classifies election-related persuasion as a "high-risk" use case. Any system designed to influence voter behavior is now subject to strict requirements. Administrative tools, such as AI systems used to plan campaign activities or optimize logistics, are exempt. Tools aimed at shaping political beliefs or voting decisions are not.

The United States, by contrast, still refuses to draw any meaningful lines. There are no binding rules on what constitutes a political influence operation, no external standards governing enforcement, and no common infrastructure for tracking AI-generated persuasion across platforms. Federal and state governments have made gestures toward regulation: the Federal Election Commission has applied old fraud provisions, the Federal Communications Commission has proposed narrow disclosure rules for broadcast advertising, and a handful of states have passed deepfake laws. But these efforts have been piecemeal and leave most digital campaigning untouched.
