The Rising Tide of Disinformation
In an age where social media has become the town square of global discourse, its role in shaping political narratives cannot be overstated. As recent events have shown, the insidious spread of disinformation through these platforms continues to threaten democracies worldwide. I’ve previously addressed this issue (HERE), but it’s worth revisiting as the problem has worsened. Disinformation now influences global events and politics, and it subtly shapes individual thinking, which, in turn, impacts political landscapes. This pervasive reach affects almost everyone, often without their awareness. I am seeing this increasingly among my friends, colleagues and acquaintances.
In the lead-up to the U.S. presidential election, BrandeisNOW initiated a series of expert analyses, highlighting crucial issues that are shaping the country’s future. A central theme that emerged was the pervasive and evolving nature of disinformation campaigns, fuelled by advances in digital technologies and by geopolitical tensions.
The Russian Disinformation Playbook
Central to this discussion is the role of Russian pseudo-state operations. Entities like the Internet Research Agency, financially backed by Putin’s allies and operating out of St. Petersburg, have been instrumental in manipulating public opinion globally. These operations, often employing “troll farms” and automated bots, and increasingly AI-generated images and videos, craft and disseminate disinformation, targeting not just the United States but regions worldwide, including South America, Africa, and especially former Soviet Bloc countries like Ukraine. More recently, Russian interference via social media has been uncovered in elections in Romania and Moldova.
Targeting the United States: A Cacophony of Campaigns
The United States has not been immune to these tactics. In the run-up to the 2016 presidential election, a myriad of campaigns—ranging from pro-Trump propaganda to hoaxes about Thanksgiving turkeys, anti-vaccination rhetoric, and even the organisation of both pro-Trump and anti-Trump protests—illustrated the diverse and often contradictory nature of these efforts. This strategy seems less about convincing and more about creating a cacophony of noise, blurring the lines between truth and fiction and sowing chaos and division.
Targeting Australia: Russian Influence in the Indigenous Rights Referendum
Even in Australia, smaller and further from these conflicts, Russian disinformation efforts have manifested through individuals like Simeon Boikov, known as the “Aussie Cossack.” Operating from the Russian consulate in Sydney (where he fled from federal police, who are seeking him on assault charges), Boikov has leveraged social media platforms to disseminate anti-Western rhetoric and conspiracy theories. During Australia’s Indigenous rights referendum, he actively promoted the “No” campaign, organising rallies and spreading misleading information to influence public opinion. Experts suggest that such activities align with broader Russian strategies to exploit societal divisions and undermine democratic processes in Western countries.
The Propaganda Machine: From Noise to Distrust
This strategy was starkly visible in the Russian internal propaganda surrounding events like the downing of Malaysia Airlines Flight MH17 over Ukraine in 2014. Multiple fictional narratives were pushed simultaneously, fostering a climate in which the public starts to question the very existence of objective truth. The end goal: erode trust in institutions, media, and fellow citizens, thereby weakening the fabric of democracy. In the lead-up to its invasion, Russia also financed campaigns to sow distrust and spread disinformation about events in Ukraine, so as to shape how other countries would react.
US Senate Reports: Unveiling the Extent of Russian Influence – Christians and Minorities Targeted
Reports prepared for the US Senate, including those from Oxford University’s Computational Propaganda Project and the social media research firm New Knowledge, offer comprehensive insights into Russia’s tactics. These reports trace the origins of Russian online influence, highlighting messages tailored to geography, political interests, race, and religion to manipulate American voters, with Christians and minorities particularly targeted. A few years ago, an investigation found that the top 19 Christian Facebook groups in the USA were actually run by Russian-funded ‘troll farms’.[1]
John Mark Dougan, NewsGuard’s 2024 Disinformer of the Year, has become a key figure in Russia’s disinformation machine, generating false narratives that reached over 67 million views and influenced U.S. congressional debates. He is known to particularly target conservative audiences and disillusioned populations. A former Florida deputy sheriff turned fugitive in Moscow, Dougan operates as part of a Russian influence operation linked to the GRU, creating fake local news sites and AI-generated content. His fabricated stories included baseless claims of Ukrainian bioweapons labs and false accusations of sexual abuse against U.S. politicians, with these narratives being cited by figures such as Rep. Marjorie Taylor Greene and Vice President-elect J.D. Vance. Despite publicly denying Kremlin ties, Dougan has appeared alongside Russian officials and boasted about influencing U.S. politics, even claiming credit for swaying Congress to defund Ukraine. As his network targets Germany’s upcoming elections, Dougan’s methods—AI-generated videos, staged whistleblowers, and fake news sites—show how disinformation campaigns can manipulate public opinion and destabilize democracies on a global scale.
The Complicity of Tech Platforms
A significant concern highlighted in these reports is the passivity of tech companies in countering these influence operations. Their delayed responses have been a crucial factor in the success of such disinformation campaigns, and some platforms have gone further, becoming active participants in spreading Russian disinformation.
Russia’s Domestic Propaganda: A Tool for Control
Within Russia, the state-controlled media serves as a potent tool for domestic propaganda, fuelling military hysteria and creating a perceived need for internal and external enemies. This methodology, reminiscent of historical propaganda techniques, is used as a weapon of war, controlling the populace and supporting the government’s agendas.
The Evolution of AI-Driven Disinformation
The rise of generative AI technologies has revolutionised the production of disinformation. Tools capable of creating realistic images, videos, and audio—commonly known as deepfakes—have become increasingly accessible. This accessibility allows malicious actors to produce deceptive content at scale, undermining public trust in media and institutions. A 2024 survey highlighted the dual role of generative AI in both creating and detecting fake news, emphasising the need for comprehensive strategies to address this issue.
More Recent Incidents Highlighting the Threat
The 2024 U.S. presidential election witnessed sophisticated disinformation campaigns employing AI-generated content. Between July 2023 and July 2024, deepfake videos impersonating public figures surfaced in 38 countries, raising concerns about election interference and character defamation and underscoring the global nature of this threat.
In 2024, Romania’s presidential election was annulled after intelligence reports revealed that Russian-backed operatives had used TikTok to promote far-right candidate Călin Georgescu, significantly influencing the election outcome.
In another instance, a deepfake operation targeted U.S. Senator Ben Cardin, where perpetrators impersonated a Ukrainian official during a video call. This incident underscores the growing use of AI by malicious actors to deceive political figures and the public.
Global Disinformation Campaigns
State actors have increasingly leveraged AI to conduct disinformation campaigns. China’s “Spamouflage” network, for example, has used AI-generated content to impersonate American voters on social media platforms, aiming to sow discord ahead of elections. These operations often involve creating fake personas and disseminating divisive narratives to manipulate public opinion.
Similarly, Russia’s disinformation machine has evolved, employing AI to produce counterfeit news sites and social media bots that spread false information. I have written about this before, and I have myself been targeted by fake accounts whilst trying to raise funds for aid to Ukraine on social media.
These tactics are designed to create confusion, erode trust in democratic institutions, and influence political outcomes.
The Role of Social Media Platforms
Social media platforms have become conduits for AI-generated disinformation. The ease with which AI tools can produce and disseminate content has led to an influx of misleading information online. For instance, AI-generated images and videos have flooded platforms, making it challenging for users to distinguish between authentic and fabricated content. This phenomenon, often referred to as “AI slop,” underscores the need for improved content moderation and user awareness. AI-generated content, and images in particular, appears to have been used in pro-Hamas and pro-Gaza campaigns, among others.
Countermeasures and Policy Responses
Addressing the challenges posed by AI-driven disinformation requires a multifaceted approach. Governments and organisations are implementing policies to counteract these threats. The European Union, for example, has introduced the Digital Services Act to scrutinise data related to electoral influence and hold platforms accountable.
Additionally, the U.S. has imposed sanctions on entities involved in AI-generated election disinformation, signalling a commitment to combating foreign interference in democratic processes.
The Path Forward
As AI technology continues to advance, the potential for its misuse in disinformation campaigns grows. It is imperative for policymakers, technology companies, and civil society to collaborate in developing robust strategies to detect and counteract AI-generated disinformation. Enhancing digital literacy among the public, investing in detection technologies, and enforcing transparent policies are crucial steps toward safeguarding democratic institutions against the rising tide of AI-driven disinformation. Much of this is out of our control, but in the meantime we can be wise, especially those of us in targeted groups such as Christians, minority communities, or electorally influential areas, and consider carefully what we believe and share.
I’d love to hear your comments!