
2024 Will See More Than 50 Elections Around the World. What Risk Does AI Pose?

With generative AI available in a variety of languages, there is a great likelihood that this software will further election misinformation, disinformation and conspiracy theories.

Words: Heather Ashby
Pictures: Mika Baumeister

This year will be one of the most consequential in recent memory, as more than 50 elections will take place in countries across the world covering nearly two billion people. With the role of technology increasing in a multitude of sectors, communications technology (i.e., social media platforms and messaging apps) and artificial intelligence (AI) are poised to have varying levels of impact on the elections in 2024. Without strong collaboration and planning between peacebuilders, civil society, technology companies and governments, the fallout from unmanaged technology use around the elections will be far reaching, from an increasing inability to discern fact from fiction to distrust in democratic political processes.

AI’s development and use will continue to consume global governance discussions. While those conversations are taking shape, political candidates are already applying the technology to electoral campaigning. From Pakistan to the United States, AI is serving as a resource for candidates to spread their message more easily. Political campaigns have used AI to robocall voters, draft campaign messages and create automated messages. Over the course of 2024, voters may find themselves interacting with AI as companies are able to build software to hold conversations with voters and answer questions about a candidate.

Role of Artificial Intelligence

In countries with diverse populations who speak different languages, politicians are using AI to replicate their voices in a variety of languages for outreach. In India, Prime Minister Narendra Modi is employing AI to communicate in Hindi and Tamil. In addition to politicians using AI, individuals are creating memes and songs representing the likeness of politicians, which, beyond humor, can become the source of malicious deepfakes. The proliferation of AI is lowering the cost for campaigns and individuals to harness the technology to spread a particular message.

Across the world, politicians are finding different ways to use AI to support their campaigns. In Mexico, for example, candidate Xochitl Gálvez is using an AI avatar, iXochitl, as a spokesperson to share her message with voters. Launched on social media, iXochitl engages with the public and responds to statements from opposition candidates.

Beyond politicians utilizing AI to engage with voters, online chatbots — some incorporated into search engines — built on generative AI will serve as a source for election-related information in many countries where the software is available. Unfortunately, recent tests have shown that AI chatbots can provide inaccurate information for voters about the location of voting sites, candidates’ positions on issues and reference sources that can further election misinformation. With generative AI available in a variety of languages, there is a great likelihood that this software will further election misinformation, disinformation and conspiracy theories.

AI has lowered and will continue to reduce the costs for state and nonstate actors to create and spread deepfakes, which involve manipulated images and videos. Anyone with some coding knowledge and access to software applications available for download on cellphones or through a web browser can create a deepfake with little effort. Deepfakes have ranged from videos that misrepresent the mental health of a politician to pornographic images of women, particularly female journalists and officials in the public sphere from the United States to India. The vast majority of deepfakes are pornographic images of women. This type of online abuse, combined with other forms of digital harassment, will likely be prominent against women running for office or reporting on elections in 2024. It requires increased attention to prevent women from withdrawing from the public sphere, which would be deeply detrimental to democracy.

Deepfakes can also inflame racism and other forms of prejudice prevalent within a society. With elections often underscoring societal divides, deepfake videos or images misrepresenting or targeting groups or individuals will compound the challenges civil society, peacebuilders, technology companies and governments face when seeking to uphold the integrity of democratic processes.

Election Misinformation and Disinformation

The nature of misinformation and disinformation may vary based on the social media platform, messaging app or other communication mediums such as television or radio. While 2024 will feature the first wave of elections in an age of increased AI awareness and use, misinformation and disinformation do not require AI to circulate within a country. In many countries, misinformation and disinformation will spread before voting starts, during the voting period and afterward if the results are particularly close or a candidate and their supporters view the outcome with suspicion. Countries can also expect conspiracy theories to emerge, further polluting the information space for voters.

Considering the number of elections taking place globally, it will be incredibly challenging for technology and communications companies and messaging apps to tackle the misinformation, disinformation and deepfakes that may proliferate. Throughout 2023, Google, Meta and X, formerly known as Twitter, laid off thousands of workers, many of whom served on their trust, safety and election monitoring teams. With a reduction in staff and persistent challenges in moderating content in languages of the Global South, social media companies could be overwhelmed managing the misinformation, disinformation and deepfakes proliferating around elections in different regions of the world.

How to Address Disinformation in Elections

There are actions civil society, governments, technology companies and peacebuilders can take to mitigate disruptions to elections. In Indonesia, for example, the Ministry of Communication and Information Technology is tracking disinformation and conspiracy theories concerning the country’s Feb. 14 election, in addition to providing citizens with digital literacy education. The Safer Internet Lab, a joint effort between the Centre for Strategic and International Studies and Google, is working to combat misinformation and disinformation to help policymakers and the public. Such collaborations between the public and private sectors and think tanks are one approach to safeguarding elections and reducing public distrust in democratic processes.

The nonprofit Digital Action brings together around 180 civil society organizations from across the world to share information on combatting election misinformation and disinformation. Digital Action seeks to hold social media companies and governments accountable for protecting the integrity of elections. Through collaborations and inclusive organizing, Digital Action is tracking misinformation and disinformation in the Global South and raising awareness among technology companies of digital harms on social media platforms in countries outside of Europe and North America. In those countries, social media companies engage in less robust content moderation, hate speech tracking and removal, and overall monitoring of inflammatory speech than they do in Europe and North America. Coalitions such as those Digital Action is mobilizing are needed to pressure these corporations to devote greater resources to the Global South and adopt strategies that reduce online harms, particularly during and after elections.

If ever there was a need for fact-checking organizations, the time is now. Many civil society and peacebuilding organizations are at the forefront of trying to reduce and prevent violent conflict. Ahead of and during Nigeria’s election in 2023, a coalition of organizations worked together to verify claims from politicians and debunk misinformation and disinformation on social media platforms. What helped the coalition tackle misinformation and disinformation was understanding the media landscape in the country and how people receive their news, anticipating which issues were susceptible to inaccurate information, and quickly deploying factual details on social media or through partnerships with journalists in more traditional media. Such coalitions, along with funding to support them, are needed in other countries.

This year will represent a big test for democracy, which continues to be under stress from a range of threats, including polarization, the rise of authoritarianism and technology-enabled misinformation and disinformation. For 2024 and beyond, the international community needs to demonstrate the continued value of democracy, and there is no better way to advertise this governance model than through citizens’ full participation in fair elections.

This article was originally published by the United States Institute of Peace.

Heather Ashby

Heather Ashby is the associate director for USIP’s program on disruptive technologies and artificial intelligence.
