ARTIFICIAL INTELLIGENCE: What was once only science fiction has become reality. The term “chatbot” has entered our everyday lexicon, and our news feeds are filled with articles that trumpet the benefits of AI while also warning of the dangers of this brave new world. “Generative AI”—of which the prominent ChatGPT is only one example—can create text that mirrors human-authored content in a matter of seconds; simply enter a prompt and the program responds with a ready-made document.
These new generative AI technologies are widely regarded as a societal watershed, propelling fundamental change across a large swath of industries, including the practice of law. The pandemic forced legal professionals to respond to an unprecedented situation by untethering from traditional models and embracing technology as never before. Having surmounted long-held aversions to change, law firms are now eyeing generative AI as a surefire way to increase efficiency, including by streamlining document preparation. Tasks such as drafting and interpreting contracts, preparing and analyzing discovery, and creating legal arguments for briefing will all be affected by the use of generative AI.
However, although we are in the infancy of integrating this technology into everyday legal practice, loud warning bells are already sounding. Headline-grabbing cases have offered sobering warnings that generative AI must be used with great caution, particularly in researching and drafting court submissions: Numerous instances of “hallucination” have occurred—the term of art that describes when the generative AI creates false information that is confidently presented as real and accurate. In fact, OpenAI, the creator of ChatGPT, candidly acknowledges on its website that the chatbot “sometimes writes plausible-sounding but incorrect or nonsensical answers.”
Accordingly, wholesale reliance on generative AI to perform legal research and writing, absent the critical analysis, professional judgment and strategic considerations that attorneys glean through years of education and experience, can give rise to a host of serious ethical issues. Such reliance leaves both practitioners and their firms open to sanctions pursuant to Rule 11 of the Federal Rules of Civil Procedure, as well as to potential claims of legal malpractice from clients adversely affected by the use of this technology.
One recent case offers a textbook example of the perils that can arise through the use of generative AI in legal research and writing. This June, Manhattan U.S. District Judge P. Kevin Castel ordered lawyers and their firm to pay a $5,000 sanction under Rule 11 for filing a legal brief that relied upon six fictitious case citations. The lawyers in Mata v. Avianca, Inc. had used ChatGPT as their sole research source for federal legal authority to support their client’s personal injury case against Colombian airline Avianca. After the chatbot generated these half-dozen fabricated cases—which, in turn, included nonexistent internal citations and quotations—counsel crafted their arguments around these false opinions and signed the documents “under penalty of perjury that the foregoing is true and correct.”
When Avianca’s lawyers were unable to locate the cases cited in the brief, they alerted the court, which conducted its own search and was also unsuccessful in finding the rulings. After counsel was ordered to produce copies of the cited decisions, they—in the words of the court—“doubled down and did not begin to dribble out the truth until [weeks later], after the Court issued an Order to Show Cause” why sanctions should not be imposed.
Ultimately, the full story came out: The attorney who drafted the document was unfamiliar with the relevant area of federal law, and, after hearing about ChatGPT, decided to use that technology as the only resource to find case law to cite in his legal argument. Counsel at first posed broad-based queries to the chatbot, then narrowed his questions to focus on more specific points. In response, the AI made up fictitious rulings to support the client’s position, although the court reported that it generated only “summaries or excerpts but not full ‘opinions.’” According to the court, these purported excerpts included analytical “gibberish” and discussions “border[ing] on nonsensical” that alone should have raised red flags had counsel taken the time to read and digest the material. Although counsel later averred that “he could not fathom that ChatGPT could produce multiple fictitious cases” and that when asked, the chatbot had “confirmed” that the authority was real “and available on Westlaw and LexisNexis,” the court held that the totality of circumstances supported sanctions on counsel as well as the law firm.
Federal Rule of Civil Procedure 11(b)(2) provides that by presenting a document to the court, an attorney certifies “that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances,” “the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law.” The court held that although “[t]echnological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” Rule 11 “impose[s] a gatekeeping role on attorneys to ensure the accuracy of their filings.” Here, counsel “abandoned their responsibilities when they submitted nonexistent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”
The court also tallied the “[m]any harms” that flowed from the citation of false authority, including the wasting of resources by both the opposing party and the court to address and rectify the issue; the detriment to the client, who was “deprived of arguments based on authentic judicial precedent”; the damage to the reputation of the judges and courts who were named as authors of the false decisions; the promoting of “cynicism about the legal profession and the American judicial system”; and the danger that “a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”
Concluding that counsel operated in “bad faith” due to their “conscious avoidance” and the making of “false and misleading statements,” the court then imposed the $5,000 fine.
The cautionary tale of Mata underscores that in researching and drafting legal documents, the use of technology must always comport with the highest ethical standards of our profession. In the practice of law, trust is paramount, credibility is king and reputation is everything. Lawyers have the duty to represent clients competently and zealously by skillfully using their legal knowledge and expertise. Lawyers also have a duty of candor to the court to accurately present the applicable law and the facts of the case. Accordingly, attorneys’ ethical obligations call for them to use their analytical and critical thinking, contextual understanding and professional judgment when performing legal research and drafting documents. As the Mata decision made clear, these fundamental responsibilities cannot be “abandoned” to a nascent technology.
In sum, AI has already begun to create a paradigm shift in the legal profession. Undoubtedly, the technology has the beneficial potential to make aspects of practice more efficient, and it will lead attorneys and firms to both innovate and adapt. However, as seen in Mata, using it requires vigilance to ensure the accuracy of its results. Although generative AI may eventually be one tool in the legal research and writing kit, it cannot replace the unique human touch of interpreting legal precedent and principles, formulating persuasive arguments and devising a winning strategy.
Michele M. Jochner is a partner at Schiller DuCanto & Fleck LLP in Chicago, an internationally renowned matrimonial law firm, where she handles high-asset, complex appellate matters, as well as critical trial pleadings requiring sophisticated analysis, advocacy and drafting. A former law clerk to two Chief Justices of the Illinois Supreme Court, a sought-after speaker and a recognized thought leader who has penned more than 200 articles, she has been honored as one of the “Top 50 Most Influential Women in Law” by the Chicago Daily Law Bulletin and has been recognized by The Best Lawyers in America® since 2015 in Family Law. She has also held leadership positions in a number of organizations, including the Illinois State Bar Association and the Chicago Bar Association, and is the immediate past Chair of the Minimum Continuing Legal Education Board of the Supreme Court of Illinois.