AI is spreading misinformation faster than ever before.
With AI and user-friendly large language models evolving faster than global laws and businesses can adapt, experts warn that the greatest threat facing humanity today may be the spread of false or distorted information weaponized by artificial intelligence.
To address growing international concern over misinformation, the European Union passed the Digital Services Act in 2022 to rein in the major tech companies propelling the spread of lies online. Concerns over the threat posed by AI-driven misinformation have only escalated since.
In its Global Risks Report released in January, the World Economic Forum labeled misinformation produced by AI the most pressing short-term threat to society and the global economy, ranking it above a long list of growing environmental risks. Experts fear that an overabundance of misinformation, in an age awash in it, could spell the erosion of democracy as we know it.
Legal efforts to combat the rise of AI-driven misinformation are disorganized and diffuse. As of February, there were 407 different “AI-related” bills in U.S. state legislatures alone.
On August 9, a bipartisan American Bar Association task force singled out “the rise of misinformation” as one of the “key issues undermining our democratic institutions,” the ABA said in a statement at its annual meeting in Chicago.
It has never been easier to be exposed to the dangers of misinformation. It takes less than an hour for misinformation to reach most new social media users; the average TikTok user encounters misinformation about the war in Ukraine within 40 minutes of signing up, according to a recent report from fact-checking outfit NewsGuard.
Yet just as quickly as concerns around this digital threat materialize, misinformation stays one step ahead of human intervention.
Twenty-five thousand AI and tech enthusiasts from 145 countries gathered in Geneva, Switzerland, in June for the AI for Good Summit to voice their growing concerns about the threats posed by AI and misinformation. Experts emphasized the evolving challenges surrounding deepfakes, synthetic media in which one person's convincing likeness is swapped in for another's, and called for a system of checks and balances over the all-encompassing tech.
Organized by the United Nations and held periodically since 2017, the event sets a benchmark for AI-related coverage and promotes the technology’s advancements in health care, climate and sustainability efforts and several other impactful global initiatives.
Frederic Werner, Head of Strategic Engagement at the UN International Telecommunication Union (ITU), underscored the crucial need for risk mitigation standards to combat the spread of AI-driven misinformation and deepfake tactics.
As technology advances, new challenges emerge. The Department of Defense (DOD) issued public warnings this year that the user-friendly nature of AI will inevitably lead to global conflicts fought on virtual battlefields.
According to the DOD, as AI-backed tech and misinformation continue to advance, enabling ever more robust, intuitive and even militarized online strategies modeled on human behavior, these “dynamic wars” fall outside any governing legal framework with the means to prevent them.
In July, the U.S. Justice Department said it had actively disrupted numerous Russian misinformation operations online, all utilizing social media accounts enhanced by AI. The alleged Kremlin-funded operation began in early 2023 and was carried out through an umbrella network of private organizations tasked with designing a private, custom AI-powered platform that churned out fake social media accounts made to resemble either Ukrainian or American tech users.
The accounts, which primarily posted pro-Russian messages criticizing the Ukrainian government, are being banned from X and serve as one tool in Russia’s arsenal to distort public discourse surrounding the conflict. Though technology hasn’t always kept pace with human ambition, conflict thrives on misinformation, and AI has the potential to stoke disharmony in ways yet unseen and to spread lies on a worldwide scale.
X, Elon Musk's most recent high-profile acquisition, could face a series of significant fines over its alleged failure to monitor and remove dangerous or illegal misinformation related to the ongoing wars in Ukraine and Gaza.
This is only the latest move in an ongoing Digital Services Act (DSA) crackdown led by the EU against major tech companies accused of propelling misinformation.
Last December, the European Commission announced a formal probe into X’s alleged failure to properly police illegal and dangerous misinformation, and it is due to issue a set of formal charges in the coming weeks. If action is taken, the social media platform could face fines of up to 6% of its yearly revenue, roughly $264 million.
Passed in 2022, the DSA set rules to govern and monitor the online behavior of the large tech firms often described as internet “gatekeepers.” At its core, the act regulates online intermediaries and platforms such as marketplaces, social networks, content-sharing platforms, app stores and online travel and accommodation services, with the main goal of cracking down on the spread of misinformation. This legal groundwork overseas could set the stage for future federal legislation governing AI usage in the U.S.
Meanwhile, the Chinese government remains vocal about its online ambitions for AI. Even before the advent of widespread AI, China used its own companies’ technologies to flood global markets, securing access points into competing foreign political systems.
This strategically coordinated campaign is already in effect, as China continues to attach its AI-based tech to foreign internet stacks. Since 2010, China has made multiple attempts to plant telecom network bugs throughout the U.S. to extract personal consumer data. By deploying millions of routers via TP-Link and other global service providers, China covertly exploits vulnerabilities in America’s technological armor, conducting widespread consumer surveillance through social media platforms and corresponding apps.
In Ukraine, the site of what has been labeled the most significant global conflict since the emergence of smartphones, laptops and social media, Russian-led, AI-driven misinformation efforts have migrated to TikTok. Since the U.S. Justice Department’s discovery in July, over 800 AI-enhanced accounts have been removed, a process that revealed an additional 12,000 fake accounts originating from Russian IP addresses.
Misinformation tactics shaping the Ukraine conflict evolve faster by the day. Everything from recycled videos of old conflicts to real videos presented in misleading ways, with fake gunshots and explosions added for extra authenticity, is quickly spreading across TikTok, the BBC’s disinformation team recently told Agence France-Presse.
Some fake users even repurpose old footage from past conflicts or video games, pretending to report from the ground in Ukraine while asking for donations to support their purportedly authentic journalism. Little do viewers know what they’re actually scrolling into.
As the U.S. presidential election approaches in November, the threat of AI spreading discord and misinformation has tech and security experts preparing for the worst. With the methods and systems for creating misinformation only increasing in scope and intensity, the measures once put in place to counter false election claims are drifting closer to disarray than defense. Generative AI tools have made spreading misinformation easier, with the very real potential to influence the outcome of elections.
Both hacking attempts and covertly run social media campaigns are expected to disrupt the election in some form, and with the vote just months away, we’re already seeing that impact.
For the third consecutive presidential election, foreign hacking has targeted U.S. campaigns, with Iran taking the lead this time. On August 11, Microsoft reported that a hacking group linked to Iran’s Islamic Revolutionary Guard Corps had breached the account of a former senior adviser to a presidential campaign. Using that access, the group sent spear-phishing emails to a high-ranking campaign official in an attempt to infiltrate the campaign’s accounts and databases.
Former President Donald J. Trump announced that Microsoft informed his campaign about the hack, attributing it to a “Weak and Ineffective” Biden administration, although the hackers accessed only publicly available information. The full extent of the breach remains unclear, and it’s uncertain if the Iranian group, identified as Mint Sandstorm by Microsoft, achieved any significant penetration. Trump’s campaign blamed “foreign sources hostile to the United States” for a leak of internal documents reported by Politico, though it’s unclear if these were related to the Iranian efforts or an internal leak.
Investigators suggest Iran aims to see Trump defeated due to his withdrawal from the 2015 nuclear deal, the reimposition of sanctions and the killing of Maj. Gen. Qassim Suleimani. Tom Burt, Microsoft’s head of customer security, confirmed the breach but did not specify if Trump’s campaign was the target, adhering to the company’s policy of revealing details only with the victim’s permission. This incident echoes the 2016 election, where Russian hackers used similar "hack and leak" tactics to expose internal Democratic communications, impacting Hillary Clinton’s campaign.
Iranian hackers allegedly struck again just five days after the Trump incident. On August 16, OpenAI announced it had identified and thwarted another, separate influence campaign leveraging the company's generative AI technology to disseminate misinformation online, notably concerning the U.S. presidential election.
OpenAI published an eye-opening report in May revealing it had identified and thwarted five separate online campaigns orchestrated by state actors and private entities in Russia, China, Israel and Iran, which leveraged OpenAI’s advanced technology for the deceptive manipulation of public opinion and geopolitical influence.
These campaigns included generating social media content, translating and editing articles, crafting headlines and debugging software, all aimed at garnering support for political causes or influencing public sentiment in global conflicts.
U.S. lawmakers have tried to curb the spread of AI-driven misinformation internally by instituting a 2022 ban that restricts all government employees from accessing TikTok on work devices.
Whether this ban will eventually expand into a broader real-world prohibition is unclear. Still, one thing remains certain: social media is feeling the legal backlash of digital warfare now more than ever.
While many experts fear AI’s future untapped potential for harm, some believe AI could ironically be the only solution to limit the spread of misinformation.
Dave Willner, former head of trust and safety at OpenAI, suggests utilizing generative AI programs as “virtual hall monitors” to identify and remove false information from online platforms in an instant. Microsoft and Amazon are among those currently exploring independent options to build smaller, less expensive AI models for content moderation and hate speech detection.
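The article does not detail how such a “virtual hall monitor” would work in practice, but a minimal sketch, assuming a small general-purpose language model is prompted to triage posts before human moderators weigh in, might look like the following. The model name, label set and prompt here are illustrative assumptions, not a description of any company’s production system.

```python
# Hypothetical sketch of a "virtual hall monitor": a small LLM triages posts
# and routes anything suspicious to human moderators. All labels, prompts and
# the model choice are illustrative assumptions, not a real moderation pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ("likely_misinformation", "needs_human_review", "likely_benign")


def triage_post(post_text: str) -> str:
    """Return one label from LABELS for a social media post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed small, inexpensive model
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a content-moderation assistant. Classify the post "
                    f"with exactly one label from: {', '.join(LABELS)}. "
                    "Reply with the label only."
                ),
            },
            {"role": "user", "content": post_text},
        ],
    )
    label = (response.choices[0].message.content or "").strip()
    # Anything outside the expected label set falls back to human review.
    return label if label in LABELS else "needs_human_review"


if __name__ == "__main__":
    print(triage_post("BREAKING: this 2014 video shows troops surrendering in Ukraine today."))
```

In a sketch like this, the model only flags content for review; the decision to remove a post stays with human moderators, consistent with the transparency experts are calling for.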
The double-edged sword of generative AI can cut deep, leaving complex wounds that linger long after the attack. From a cost-saving perspective, most, if not all, major social media platforms will likely invest in AI-driven content moderation to guard themselves against the coming wave of AI-driven misinformation. Companies fielding this new tech should err on the side of caution, proceeding with total transparency rather than adding to the AI confusion.
Defending against the spread of misinformation produced by AI has reached an inflection point both online and in courtrooms around the world. With lawmakers and tech regulators overwhelmed by the sheer volume and complexity of false or misleading content and struggling to keep pace with constantly evolving ethical and legal dilemmas, it’s imperative we all engage in this critical fight to safeguard our digital future.
Gregory Sirico is a journalist and editor at Best Lawyers, regularly contributing to both regional publications and business editions. In 2020, he graduated from Ramapo College of New Jersey with a B.A. in Journalism. During his time there, Sirico served as an editorial assistant at his local publication, the Asbury Park Press, in addition to working as an Organizing and Communications Fellow for the Biden Presidential Campaign.