As artificial intelligence (AI) advances, concerns about it being weaponized to manipulate elections have been on the rise.
Many feared that AI would flood social media with deepfakes and disinformation during the world’s busiest election year in 2024.
However, Meta, the company behind Facebook, Instagram, and WhatsApp, says those fears did not materialize, writes Digi24.
Nick Clegg, Meta's president of global affairs and a former UK deputy prime minister, shared insights during a recent briefing.
39 cases
According to him, Russia remains the leading source of harmful online activities. Since 2017, Meta has disrupted 39 covert influence networks originating from Russia.
Despite this, Clegg noted a surprising trend: AI-generated disinformation campaigns had less impact than anticipated in the lead-up to major elections.
Meta took down around 20 covert influence operations globally in 2024. These operations often involved fake accounts and fabricated news sites designed to manipulate public opinion.
One of the most notable campaigns was a Russian network targeting people in Georgia, Armenia, and Azerbaijan.
Another effort used AI-generated content to create fake news sites styled after brands such as Fox News and The Telegraph.
These sites attempted to undermine Western support for Ukraine and spread pro-Russia narratives in Africa.
While AI didn’t play a significant role in direct election interference, Meta did act against attempts to misuse its tools.
In the month before the U.S. elections, Meta blocked over 500,000 requests to generate fake images of political figures, including Donald Trump, Joe Biden, and Kamala Harris.
Clegg cautioned that the relatively low impact of AI-driven manipulation in 2024 shouldn’t lead to complacency.
He predicted that generative AI tools would become more sophisticated and widespread in the near future.
Other experts echoed this warning. A recent report by the Centre for Emerging Technology and Security found that AI amplified existing disinformation during the U.S. elections.
This included false claims about Kamala Harris and xenophobic memes involving Haitian immigrants.
While these incidents did not directly alter election outcomes, they showed how AI can subtly influence public discourse.
Looking ahead to elections in countries like Australia and Canada, experts stressed the need for vigilance against AI-driven threats.
Though its current role is modest, the potential for AI to disrupt democratic systems is growing.