Theresa Reidy: Truth becomes lies and lies become truth — the threat of AI in our election

Used with malicious intent, AI has the potential to cause real disruption, indeed chaos in an election.
Michael Healy-Rae is out in front this election season with his light-hearted and entertaining use of artificial intelligence (AI) to create online content which ‘seems’ to show him being endorsed by Taylor Swift and kung fu fighting with Simon Harris.
These videos are known as ‘cheap-fakes’, and Healy-Rae joins an array of politicians around the world who have used this new technology. Javier Milei, the president of Argentina, has appeared as various cartoon characters in AI-generated images, as Wolverine from the movies, in a picture with Elon Musk, and as an animated lion taking a chainsaw to the Argentine public debt. Donald Trump posted an AI-generated image of Kamala Harris addressing a ‘supposed Chinese political congress’ to reinforce his message that she was a dangerous communist. And AI images were also used in the local and European Parliament elections earlier this year in Ireland and across the EU.
All these images were easily identifiable as AI-generated, or at the very least, as having been modified in some form.
But used with malicious intent, AI has the potential to cause real disruption, indeed chaos in an election. Threats can come in several forms. AI tools make it much easier for political opponents to engage in character assassination through the circulation of ‘fake evidence’ in the form of images, videos, or audio recordings.
Before AI arrived on all our mobile phones, editing a photo or creating a fake video was a complicated and highly skilled process, but AI tools have transformed the landscape. Images, audio, and video can now be generated with relative ease. These ‘deepfakes’ can be used against any politician, but women have been especially targeted.
The president of Moldova, Maia Sandu, was the subject of a Russian disinformation attack in 2023 when a fake video circulated online that presented her as a puppet of European elites. More recently, several British female politicians were affected by the circulation of deepfake pornography videos around the July general election. There has also been a case in Northern Ireland.
Deepfake porn is political violence — it is designed to demean the woman politician and, as experts have argued, to push women out of the public sphere.
Fake information videos also have the potential to undermine election integrity. Voters can be falsely told to go to the wrong polling station, that they can stay at home and vote later, or that they should print their name on the ballot paper, which would spoil their vote. Several instances have occurred already this year around the world. There is a real challenge for electoral authorities here, because AI also has huge potential to assist voters.

AI bots could answer straightforward queries from voters, but wider use of these tools may also make some voters more susceptible to damaging fake information, especially when it spreads widely in private digital networks like Telegram or WhatsApp. Electoral authorities may not even be aware that a deepfake exists.
The Irish Electoral Commission has a voluntary code of conduct which covers deceptive AI, and there is also an AI Elections Accord, which all the major digital platforms have signed up to. But the speed with which deepfakes can spread online, and the damage they can cause, mean they pose a real danger to the integrity of our electoral processes and to political candidates.
Media literacy and regular public discussion of these challenges are the most effective weapons in the misinformation wars.
We should also be aware of the more legitimate ways in which AI tools intersect with politics. In many sectors, AI technologies are being used to mine people’s data so that digital ads can be targeted more effectively and persuade us all to buy more stuff. Political actors are following suit with this enhanced form of micro-targeting.
The 2018 Cambridge Analytica scandal was the first time many of us heard about micro-targeting, when it emerged that the first Donald Trump presidential campaign had interrogated the personal data of millions of Facebook users and devised an advertising strategy targeted at specific groups of voters.
There is a big debate about whether this type of micro-targeting of political ads can be effective. But we should remember that ‘effectiveness’ means many things.
Political psychology research tells us it is very difficult to get people to change their political affiliations or to change their opinion on something important, but moving our attention to something else is a much easier task.
The AI tools available today can deliver ever more sophisticated types of analysis on ever-increasing volumes of data. Micro-targeting is becoming much more nuanced. It could be used to reinforce existing opinions, to drive further polarisation, or to distract us from more serious issues.
Ultimately, perhaps the greatest threat these technologies pose to our elections is that it will become more difficult for voters to determine what is real and what is fake. If this happens, voters may simply withdraw from politics and come to distrust all information. Truth becomes lies and lies become truth, an outcome that is sometimes called the liar’s dividend.
- Dr Theresa Reidy is a political scientist at University College Cork and a co-editor of How Ireland Voted 2020 and Politics in the Republic of Ireland