Misinformation Age

As AI weaponizes lies, can global law, corporate business practices and legal professionals adapt to the growing threat?

Gregory Sirico

October 21, 2024

AI is spreading misinformation faster than ever before.

With AI and user-friendly large language models evolving faster than global laws and businesses can adapt, experts warn that the greatest threat facing humanity today may be the spread of false or distorted information weaponized by artificial intelligence.

To address the growing international concern over misinformation, the European Union passed the Digital Services Act in 2022 to rein in the major tech companies propelling the spread of lies online. Concerns over the threat posed by AI-driven misinformation have only escalated since.

In its Global Risks Report, released in January, the World Economic Forum labeled misinformation produced by AI the most pressing short-term threat to society and the global economy, ranking it above a laundry list of growing environmental risks. Experts fear that an overabundance of misinformation, in an age already saturated with it, could spell the erosion of democracy as we know it.

Legal efforts to combat the rise of AI-driven misinformation are disorganized and diffuse. As of February, there were 407 different “AI-related” bills in U.S. state legislatures alone.

On August 9, a bipartisan American Bar Association task force also singled out “the rise of misinformation” as one of the “key issues undermining our democratic institutions,” the ABA said in a statement at its annual meeting in Chicago.

It’s never been easier to be exposed to the dangers of misinformation, which now takes less than an hour to reach most new social media users. The average TikTok user is exposed to misinformation about the war in Ukraine within 40 minutes of signing up, according to a recent report from fact-checking outfit NewsGuard.

Just as quickly as concerns around this digital threat materialize, misinformation remains one step ahead of human intervention.

Twenty-five thousand AI and tech enthusiasts from 145 countries gathered in Geneva, Switzerland, in June for the AI for Good Summit to voice their growing concerns about the threats posed by AI and misinformation. Experts emphasized the evolving challenges surrounding deepfakes—synthetic media that convincingly swaps one person's likeness for another's—calling for a system of checks and balances over the all-encompassing tech.

Organized by the United Nations and held intermittently since 2017, the event sets a benchmark for AI-related coverage and promotes the technology's advancements in health care, climate and sustainability efforts, and several other impactful global initiatives.

Frederic Werner, Head of Strategic Engagement at the UN International Telecommunication Union (ITU), underscored the crucial need for risk-mitigation standards to combat the spread of AI-driven misinformation and deepfakes.

As technology advances, new challenges emerge. The Department of Defense (DOD) warned publicly this year that the user-friendly nature of AI will inevitably lead to misinformation-fueled global conflicts waged on virtual battlefields.

According to the DOD, as AI-backed tech and misinformation continue to advance—enabling more robust, intuitive and even militarized online strategies modeled on human behavior—these “dynamic wars” currently fall outside any governing legal framework capable of preventing them.

In July, the U.S. Justice Department said it had actively disrupted numerous Russian misinformation operations online, all utilizing social media accounts enhanced by AI. The alleged Kremlin-funded operation began in early 2023 and was carried out through an umbrella network of private organizations tasked with designing a private, custom AI-powered platform that churned out fake social media accounts made to resemble Ukrainian or American tech users.

The accounts, which primarily posted pro-Russian messages criticizing the Ukrainian government, are in the process of being banned from X and serve as one tool in Russia's arsenal to contort public discourse surrounding the conflict. Though technology hasn't always kept pace with human ambition, conflict thrives off misinformation, and AI has the potential to stoke disharmony in ways yet unseen and spread lies on a worldwide scale.

X, Elon Musk's most recent creative acquisition, could face a series of significant fines due to its alleged failure to monitor and ban dangerous or illegal misinformation related to the ongoing wars in Ukraine and Gaza.

This is only the latest move in an ongoing Digital Services Act (DSA) crackdown led by the EU against major tech companies propelling misinformation forward.

Last December, the European Commission announced an investigative probe into X's alleged failure to properly police illegal and dangerous misinformation. The commission is due to issue a set of formal charges in the coming weeks. If action is taken, the social media platform could face fines of up to 6% of its yearly revenue—roughly $264 million.

Passed in 2022, the DSA set rules to govern and monitor the online behavior of tech firms known as internet “gatekeepers.” At its core, the act regulates online intermediaries and platforms such as marketplaces, social networks, content-sharing platforms, app stores and online travel and accommodation services, with the main goal of cracking down on the spread of misinformation. This legal groundwork overseas could set the stage for future federal legislation governing AI use in the U.S.

Meanwhile, the Chinese government remains vocal about its online ambitions with AI. Before the advent of widespread AI, China utilized its own companies' technologies to flood global markets, securing access points into competing foreign political systems.

This strategically coordinated attack is already in effect, as China continues to attach its AI-based tech to foreign internet stacks. Since 2010, China has made multiple attempts to plant telecom network bugs throughout the U.S. to extract personal consumer data. By deploying millions of routers through TP-Link and other global service providers, China covertly exploits vulnerabilities in America's technological armor, conducting widespread consumer surveillance through social media platforms and corresponding apps.

In what has been labeled the most significant global conflict since the emergence of smartphones, laptops and social media, Russian-led, AI-driven misinformation efforts in Ukraine have migrated to TikTok. Since the U.S. Justice Department's discovery in July, over 800 AI-enhanced accounts have been removed, revealing an additional 12,000 fake accounts originating from Russian IP addresses.

Misinformation tactics shaping the Ukraine conflict evolve by the day. Everything from recycled videos of old conflicts to real footage presented in misleading ways—with fake gunshots and explosions added for extra authenticity—is quickly spreading across TikTok, the BBC's disinformation team recently told Agence France-Presse.

Some fake users even repurpose old footage from past global conflicts or video games, pretending to report on the ground in Ukraine, and asking for donations to support their purportedly authentic journalism. Little do viewers know what they’re actually scrolling into.

As the U.S. presidential election approaches in November, the threat of AI spreading discord and misinformation has tech and security experts preparing for the worst. With the methods and systems for creating misinformation only increasing in scope and intensity, the measures once put in place to counter false election claims are sliding closer to disarray than defense. Generative AI tools have made spreading misinformation easier, with the very real potential to influence the outcome of elections.

Both hacking attempts and covertly run social media campaigns are expected to disrupt the election in some form. Just weeks ahead of the election, we’re already seeing that impact.

For the third consecutive presidential election, foreign hacking has targeted U.S. campaigns, with Iran taking the lead this time. On August 11, Microsoft reported that an Iranian hacking group linked to Iran’s Islamic Revolutionary Guard Corps breached the account of a former senior adviser to a presidential campaign. Utilizing this access, they sent spear-phishing emails to a high-ranking campaign official in an attempt to infiltrate the campaign’s accounts and databases.

Former President Donald J. Trump announced that Microsoft informed his campaign about the hack, attributing it to a “Weak and Ineffective” Biden administration, although the hackers accessed only publicly available information. The full extent of the breach remains unclear, and it’s uncertain if the Iranian group, identified as Mint Sandstorm by Microsoft, achieved any significant penetration. Trump’s campaign blamed “foreign sources hostile to the United States” for a leak of internal documents reported by Politico, though it’s unclear if these were related to the Iranian efforts or an internal leak.

Investigators suggest Iran aims to see Trump defeated due to his withdrawal from the 2015 nuclear deal, the reimposition of sanctions and the killing of Maj. Gen. Qassim Suleimani. Tom Burt, Microsoft’s head of customer security, confirmed the breach but did not specify if Trump’s campaign was the target, adhering to the company’s policy of revealing details only with the victim’s permission. This incident echoes the 2016 election, where Russian hackers used similar "hack and leak" tactics to expose internal Democratic communications, impacting Hillary Clinton’s campaign.

Iranian hackers allegedly struck again just five days after the Trump incident. On August 16, OpenAI announced it had identified and thwarted another, separate influence campaign leveraging the company's generative AI technology to disseminate misinformation online, notably concerning the U.S. presidential election.

OpenAI published an eye-opening report in May revealing it had identified and thwarted five separate online campaigns orchestrated by state actors and private entities in Russia, China, Israel and Iran, which leveraged OpenAI’s advanced technology for the deceptive manipulation of public opinion and geopolitical influence.

These campaigns included generating social media content, translating and editing articles, crafting headlines and debugging software, all aimed at garnering support for political causes or influencing public sentiment in global conflicts.

U.S. lawmakers have tried to curb the spread of AI-driven misinformation internally, instituting a 2022 ban that restricts all government employees from accessing TikTok on work devices.

Whether this ban will eventually grow into a broader restriction is unclear. Still, one thing remains certain: social media is feeling the legal backlash of digital warfare now more than ever.

While many experts fear AI’s future untapped potential for harm, some believe AI could ironically be the only solution to limit the spread of misinformation.

Dave Willner, former head of trust and safety at OpenAI, suggests utilizing generative AI programs as “virtual hall monitors” to identify and remove false information from online platforms in an instant. Microsoft and Amazon are among those currently exploring independent options to build smaller, less expensive AI models for content moderation and hate speech detection.

The double-edged sword of generative AI can cut deep, leaving complex wounds that linger long after the attack. From a cost-saving perspective, most, if not all, major social media platforms will likely invest in AI-assisted content moderation to guard against the coming wave of AI-driven misinformation. Companies fielding this new tech should err on the side of caution, proceeding with total transparency rather than adding to the AI confusion.

Defending against the spread of misinformation produced by AI has reached an inflection point both online and in courtrooms around the world. With lawmakers and tech regulators overwhelmed by the sheer volume and complexity of false or misleading content and struggling to keep pace with constantly evolving ethical and legal dilemmas, it’s imperative we all engage in this critical fight to safeguard our digital future.

Gregory Sirico is a journalist and editor at Best Lawyers, regularly contributing to both regional publications and business editions. In 2020, he graduated from Ramapo College of New Jersey with a B.A. in Journalism. During his time there, Sirico served as an editorial assistant at his local publication, the Asbury Park Press, in addition to working as an Organizing and Communications Fellow for the Biden Presidential Campaign.

Headline Image: iStock/Moor Studio
