Are we overestimating the threat of AI in election interference? According to Sam Stockwell, a researcher at the Alan Turing Institute, fears that AI-enabled falsehoods would sway election results may have been overblown. In a study of three elections held in 2024, Stockwell found no evidence that AI-generated content definitively influenced the outcomes.
Stockwell identified 16 cases of AI-generated misinformation during the UK general election and only 11 in the EU and French elections combined. This content came from a range of actors, from domestic groups to those linked to hostile states such as Russia.
This research aligns with recent warnings that the focus on election interference may be diverting attention from more significant threats to democracy. AI-generated content, it seems, may not be as effective in swaying voters as previously feared.
According to Stockwell, most individuals exposed to AI-generated misinformation already held preexisting beliefs that aligned with the false content. This meant that the content often reinforced existing views rather than persuading undecided voters.
While AI may not be significantly affecting elections at present, traditional tactics, such as deploying bots in comment sections and exploiting influencers, remain potent methods of spreading disinformation. Bad actors are also using generative AI to rewrite news articles or churn out deceptive online content.
“AI is not providing a substantial advantage currently, as simpler methods of spreading false information continue to prevail,” explained Felix Simon, a researcher at the Reuters Institute for the Study of Journalism.
However, drawing definitive conclusions about AI’s influence on elections is challenging because the data is limited, as Samuel Woolley, a disinformation expert at the University of Pittsburgh, notes. The subtler, downstream effects of AI on civic engagement are harder to measure but could still reshape the political landscape.
Early evidence from recent elections suggests that AI-generated content may be more effective at harassing politicians and sowing confusion than at swaying public opinion on a large scale, according to Stockwell.