AI-generated fake news is coming to an election near you


Many years before ChatGPT was released, my research group, the University of Cambridge Social Decision-Making Laboratory, wondered whether it was possible to get neural networks to generate misinformation. To find out, we trained ChatGPT’s predecessor, GPT-2, on examples of popular conspiracy theories and then asked it to generate fake news for us. It gave us thousands of misleading but plausible-sounding news stories. A few examples: “Certain vaccines are loaded with dangerous chemicals and toxins,” and “Government officials manipulated stock prices to hide scandals.” The question was, would anyone believe these claims?

We created the first psychometric tool to test this hypothesis, which we called the Misinformation Susceptibility Test (MIST). In collaboration with YouGov, we used the AI-generated headlines to test how susceptible Americans are to AI-generated fake news. The results were alarming: 41 percent of Americans mistakenly thought the vaccine headline was true, and 46 percent thought the government was manipulating the stock market. Another recent study, published in the journal Science, showed not only that GPT-3 produces more compelling disinformation than humans, but also that humans cannot reliably distinguish between human and AI-generated misinformation.

My prediction for 2024 is that AI-generated misinformation will come to an election near you, and you probably won’t even realize it. In fact, you may have already been exposed to some examples. In May 2023, a viral fake story about a bombing at the Pentagon was accompanied by an AI-generated image that showed a large cloud of smoke. This caused public uproar and even a drop in the stock market. Republican presidential candidate Ron DeSantis used fake images of Donald Trump hugging Anthony Fauci as part of his political campaign. By mixing real and AI-generated images, politicians can blur the lines between fact and fiction, and use AI to bolster their political attacks.

Before the explosion of generative AI, cyber propaganda firms around the world had to write misleading messages themselves and employ human troll farms to target people at scale. With the help of AI, the process of generating misleading news headlines can be automated and weaponized with minimal human intervention. For example, micro-targeting – the practice of targeting people with messages based on digital tracking data, such as their Facebook likes – was already a concern in previous elections, even though its main obstacle was the need to generate hundreds of variants of the same message to see what works on a given group of people. What was once labor-intensive and expensive is now cheap and readily available, with no barrier to entry. AI has effectively democratized the creation of disinformation: anyone with access to a chatbot can now prompt the model on a specific topic, whether it’s immigration, gun control, climate change, or LGBTQ+ issues, and generate dozens of highly persuasive fake news stories in minutes. In fact, hundreds of AI-generated news sites are already popping up and spreading fake stories and videos.

To test the impact of such AI-generated disinformation on people’s political preferences, researchers from the University of Amsterdam created a deepfake video of a politician offending his religious voter base. For example, in the video the politician joked: “As Christ would say, don’t crucify me for it.” The researchers found that religious Christian voters who watched the deepfake video had more negative attitudes toward the politician than those in the control group.

It’s one thing to mislead people with AI-generated disinformation in experiments. It is something else entirely to experiment with our democracy. In 2024 we will see more deepfakes, voice cloning, identity manipulation, and AI-produced fake news. Governments will severely limit, if not ban, the use of AI in political campaigns. Because if they don’t, AI will undermine democratic elections.
