By Pawel Plonka
The upcoming year, 2024, will see a number of important national elections worldwide, from the United States and the United Kingdom to India, Taiwan, and South Korea. As these contests approach, generative artificial intelligence is emerging in a new role. This groundbreaking technology has the potential to transform the nature of political campaigns - from voters' perception of actors and events to backroom financial management and politicians' understanding of societal expectations. How can politicians improve their odds with such an innovative apparatus? What threats does it pose to democracies across the world? To what extent can AI-generated content influence public opinion, and how does this affect democratic engagement in the digital age?
PROLIFERATION OF DISINFORMATION & POLITICAL ADVERTISING
Disinformation has been a part of electoral processes for years. Influence operations remain an effective means of sowing confusion and panic and swaying voters, all from a comfortable distance. Increased access to generative artificial intelligence enables actors of every kind - unaffiliated individuals, rival campaign headquarters, and foreign powers - to generate fictitious news quickly. There are already many examples of disinformation disseminated in this manner. In 2020, two political operatives in US 'battleground states' used robocalls to spread false claims that voting by mail would result in credit card companies collecting outstanding debts and law enforcement tracking voters. Commenting on these events, Ben Winters, a senior counsel at the Electronic Privacy Information Center, pointed out that with generative AI, new groups can adopt similar strategies with a lower risk of detection.
At the level of campaign offices, generative artificial intelligence thrives as well. Ron DeSantis, in order to slander his party rival, used AI-generated deepfakes of Donald Trump embracing Anthony Fauci, the former White House chief medical advisor despised in right-wing circles. This is not the sole instance of such an endeavour in American politics; over the past months, there has been a noticeable trend toward the normalization of deepfakes for campaign purposes. In response to Biden's announcement of a re-election bid, the Republican National Committee launched a video packed with generated content depicting, for example, a Chinese invasion of Taiwan and distress on American streets.
Generative artificial intelligence may also reduce campaigns' costs while enhancing their efficiency. Human-like content created by automated accounts before and during campaigns can be extremely valuable for conveying certain messages or manufacturing a mirage of broad social interest. Most importantly, even poorly constructed machine-generated content, different every time it is produced, is well suited to evading supervisory mechanisms. Social media has already accelerated the targeting of specific groups - now generative artificial intelligence can enable 'micro-micro targeting'. More advanced technology could allow campaign staff to create affordable, hyper-individualized communication customized to each voter and donor - a personalized experience, especially around 'calls to action' and relevant opportunities, is excellent for improving engagement with supporters.
The use of AI to empower electoral campaigns is on the rise. Although mainstream generative artificial intelligence is at an early stage of development, its smart implementation might bring tangible results. Quiller AI is one example of a company that specializes in leveraging artificial intelligence for campaigning purposes, in its case on behalf of the US Democratic Party. Its services include drafting emails and automating repetitive tasks around donor engagement. Mike Nellis, the founder of the company, describes its purpose: 'Quiller will not only accelerate the writing and coding process, but also enhance creativity and efficiency, leading to higher open rates, fewer unsubscribes, and more campaign funds.'
Campaign staff and politicians often base their narratives on the reactions and stances of their electorate. Public opinion is quintessential to the democratic process, especially in shaping a candidate's program and giving them cues on how to approach certain social issues. Recently, however, a phenomenon known as 'astroturfing' has emerged: an artificially manufactured political movement designed to simulate genuine grassroots activism. Actors involved in such activities aim to create the perception of a broad social consensus on specific issues. Astroturfing already poses a serious danger, and the potential of generative artificial intelligence exacerbates it.
An experiment conducted by Sarah Kreps and Douglas L. Kriner of Cornell University puts these themes to the test. They sent more than 32,000 emails, both human-written and generated by OpenAI's GPT-3, to roughly 7,200 state legislators across six policy issue areas. Their analysis suggested that lawmakers perceived the AI-generated messages as almost as credible as those written by humans. Although large language models often produce bland and even inaccurate content, it is important to acknowledge that they are constantly improving, e.g. through reinforcement learning from human feedback. As feedback from the people remains a vital source of information for policymakers and politicians, AI-powered astroturfing might greatly influence their behaviour. Maliciously designed bogus public opinion blurs the line between what is real and what is not.
Generative AI presents both opportunities and significant challenges for the future of political campaigns. This new technology can significantly empower campaign staff and optimize many processes. However, it also has the potential to sow chaos. Candidates can use it to distort reality and create a false representation of the truth. They have done so before, but generative AI now enables it at a much larger scale and a much faster pace. 2024 is likely to bring more of the aforementioned strategies and operations into the spotlight in campaign circles. We can expect more disinformation, more artificially generated content, and perhaps more engagement with voters, without their knowing whether there is anyone on the candidate's side at all. The implications of these innovations are profound, and as they continue to advance, politicians and democracies worldwide must navigate their impact with vigilance.