With the release this past March of GPT-4, the latest model behind OpenAI’s generative artificial-intelligence chatbot ChatGPT, and the evolution of AI across industries of all kinds, legal marketers are beginning to employ these technological advances to increase their firms’ visibility. According to Salesforce’s Fourth Annual State of Marketing research report, 51% of law firms nationwide have already integrated AI into their marketing and sales strategies, and AI programs could ultimately generate an estimated value of roughly $2.6 trillion.
In addition to enhancing their marketing strategies, law firms are seeing a noticeable uptrend in tech-related practice areas. The 2024 edition of Best Lawyers: Ones to Watch® in America recorded a significant rise in recognized individuals compared to the third edition, with a 45% increase in Technology Law honorees and a remarkable 124% increase in Privacy and Data Security Law honorees. The honorees in these fields also reflect the metropolitan areas newly added to the fourth edition for these practices: Privacy and Data Security Law now includes 15 additional metro areas, while Technology Law adds seven. Notably, Washington D.C. has the highest concentration of honorees in both practice areas.
Historically, the legal industry has always been highly competitive, with marketing strategies typically dominated by over-the-top personalities plastering their likeness on billboards or TV ads. Despite the success of these conventional methods, tech experts anticipate the current AI trend will continue to grow. What exactly is the extent of AI’s impact on the legal industry, and what benefits are expected soon? Let’s dive into the pros, cons and lessons to be learned from the implementation of AI in a firm’s marketing.
The Logical Option
For smaller firms and legal operations, which often have limited funds for investment or lack access to in-house resources, relying on otherwise free AI programs to produce blog posts, web or email copy, online ads and social media posts can be a helpful first step in building a robust marketing strategy. Firms can use AI to draft personalized lawyer communications or schedule automated meetings, boosting productivity and cutting costs while offering a new level of refinement to external-facing content.
Upon its release, ChatGPT quickly captivated the world; it now draws about 1.8 billion visits a month. Students and Fortune 500 executives alike have used it for countless tasks. Now, just a year after its debut, it’s breaking into the practice of law—albeit not necessarily in the way it was originally intended.
When legal marketing teams wish to produce content quickly for various platforms, AI tools can be a genuine boon, as long as the goal isn’t long-term branding and sales positioning. Although AI can bring short-term benefits, firms and companies will likely need a dedicated team member to review the content their AI program of choice produces for accuracy.
Limitations to AI Use
At the moment, ChatGPT and most, if not all, AI tools have no access to real-time content, which can lead to unseen limitations in cases where the newest information or statistics are required. Additionally, AI-generated content is presented as original but has on several occasions been flagged for plagiarism, which poses a growing risk of intellectual-property violations and raises questions about legal compliance.
This is often the result of AI “hallucinations,” the term of art for a chatbot embedding seemingly plausible falsehoods in its content. Hallucinations have become more prominent in the wake of the popularity of so-called “large language models.” Representatives from OpenAI, the company that developed ChatGPT, quickly acknowledged that hallucinations are an expected shortcoming and the most significant limitation of its current product; the program now includes a standard disclaimer that the system isn’t always able to distinguish fact from fiction. To avoid such troubles, firms and legal-tech companies producing AI content must also refrain from sharing sensitive client information when creating marketing materials.
Cautionary Tech Tales
Despite the productivity upsides AI can bring, technological advances can be deceptive. On June 5 of this year, Mark Walters, a radio host in Georgia, filed a defamation suit against OpenAI in Georgia’s Superior Court of Gwinnett County. According to court documents, the chatbot generated falsified information in response to a third party’s research queries.
Fredy Riehl, a journalist and editor in chief of the magazine AmmoLand Shooting Sports News, was conducting legal research on the federal case Second Amendment Foundation v. Ferguson when ChatGPT produced a series of fake legal complaints against Walters, accusing him of defrauding and embezzling funds from a nonprofit organization of which he was never an employee. Walters seeks significant monetary damages from OpenAI; his suit is one of the latest in a batch of legal complaints challenging ChatGPT’s overall viability in the wider consumer marketplace.
Walters filed his suit just a few weeks after two New York–based lawyers faced a growing likelihood of sanctions for using ChatGPT to research and draft legal briefs that cited several fake precedents. In early April, Steven Schwartz, who works at the personal injury firm Levidow, Levidow & Oberman P.C., and his team filed claims against Colombian airline Avianca on behalf of their client Roberto Mata, who allegedly sustained injuries on a flight into New York City. The airline asked the presiding federal judge to dismiss the case. In response, Mata’s lawyers filed a multipage brief insisting that the suit should proceed.
According to court documents, Schwartz, who is not admitted to practice law in the Southern District of New York, enlisted the help of Peter LoDuca, an associate at the firm, to file the claims against Avianca. Even so, Schwartz continued to perform all the legal work and research required for the case, with one major exception. In the early going, Schwartz said he used OpenAI’s generative chatbot to supplement the research he had already done and to help write briefs.
Several of the cases supplied by the program turned out to be fake. According to the website Human Resources Director, several of the quotations and citations were questionable or flat-out inaccurate: “The Clerk of the United States Court of Appeals for the Eleventh Circuit, in response to the Court’s inquiry, confirmed that one of the cases cited by Schwartz—Varghese v. China Southern Airlines Ltd.—is nonexistent.” Five additional cited cases also were fake; U.S. District Judge Kevin Castel witheringly called them “bogus judicial decisions with bogus quotes and bogus internal citations.”
Schwartz has apologized profusely for supplementing his research in this manner, noting the rapidly growing use of AI in law firms. He hadn’t used ChatGPT previously, he said, and did not realize that it could fabricate information.
The Long-Term Legal Ramifications
Walters’s lawsuit could be the catalyst for many cases that will examine the legal liabilities related to chatbot use. Experts say that despite the glaring problems Walters lays out, his case also has deficiencies and will likely work its way through court for a long time.
“In principle, I think libel lawsuits against OpenAI might be viable. In practice, I think this lawsuit is unlikely to succeed,” Eugene Volokh, a First Amendment law professor at UCLA, told Bloomberg Law. “I suppose the claim might be, ‘You knew that your program was outputting falsehoods generally, and you were reckless about it.’ My sense of the case law is that it needs to be knowledge or recklessness as to the falsity of a particular statement.”
The tale of Schwartz’s botched research certainly offers lessons for anyone using ChatGPT for research. Regardless of the technological advances, regardless of the awe and wonder AI often occasions, there is simply no substitute for humans producing and fact-checking legal work. Practicing attorneys must abide by professional conduct rules. Drafted by the American Bar Association, the Model Rules of Professional Conduct include a variety of provisions on technological competence, which might now also begin to consider the impact of AI programs on ongoing litigation.
Drew Simshaw, a law professor at Gonzaga University, told Bloomberg Law that Schwartz’s case stands out as an extreme example of the legal industry’s current reliance on AI programs. “From a regulatory standpoint, there’s a lot of questions out there,” he said. “The legal profession has lagged behind in regulating technology. States usually adopt ethics rules closely modeling the ABA’s Model Rules of Professional Conduct. Those really haven’t been updated since 2012, when ‘cloud computing’ was the new buzzword.”
Yes, AI can generate slick-sounding content quickly. But using it in a legal marketing environment may well present more difficulties than benefits in an age in which human input is becoming ever rarer.