
Artificial Intelligence Hallucinates Case Law Introduced in a Canadian Court

The misuse of artificial intelligence (AI) has found its way into Canadian courts

Charles E. Gluckstein CS

March 25, 2024 12:48 PM

We knew it was bound to happen. And now it has. The misuse of artificial intelligence (AI) has found its way into Canadian courts - and it was an officer of the court who was ultimately responsible for bringing it there.

On January 23, 2024, Global News reported that British Columbia lawyers Lorne and Fraser MacLean had discovered fake case law entered into court by opposing counsel for a civil case involving a “high-net-worth family matter, with the best interests of children at stake.”

Lawyer Chong Ke told the court she did not intentionally try to mislead it by submitting legal briefs containing this fake case law; rather, the AI tool (ChatGPT) she allegedly employed to assist in her research had a “hallucination” that prompted it to create realistic-sounding fake information.

In this blog post, I’ll outline how this case makes BC ground zero for fake AI cases in Canada and explain why human error, rather than a glitch in the AI software itself, led to this very concerning situation. I’ll also suggest why incidents such as this one reaffirm my belief that we should not shy away from this exciting new technology, but rather regulate and refine it so that it helps lawyers do their work instead of throwing our profession into disrepute.

What Is AI and How Was It (Mis)Used in BC?

AI trains machines through experiential learning to think and act like humans. AI systems adjust to new data inputs, altering and refining their outputs much as humans adjust their thinking when they learn from lived experience or are taught new information.

Well-known AI systems currently available derive their knowledge from deep learning and natural language processing. For example, ChatGPT is a chatbot built on a large language model that permits users to engage in a conversation: it analyzes prompts and replies to provide context for the discussion. Through experimentation with the tool, users have discovered that its versatility makes it suitable for some creative endeavours and for writing and/or correcting computer code.
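To make that conversational mechanic concrete, here is a minimal sketch of a multi-turn exchange using OpenAI’s Python SDK. The model name, prompts, and workflow are my own illustrative assumptions; they are not details drawn from the BC case.

```python
# Minimal sketch of a multi-turn chat exchange (OpenAI Python SDK, v1.x).
# Model name and prompts are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The full message history is sent with each request; this is how the
# chatbot "analyzes prompts and replies to provide context".
messages = [
    {"role": "user", "content": "Summarize the duty of candour counsel owes a court."}
]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# The follow-up question is answered in light of the earlier turns.
messages.append({"role": "user", "content": "Does that duty cover AI-assisted research?"})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```

Nothing in that loop checks the reply against an external source of truth; it is conversation all the way down.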

Despite these exciting possibilities, ChatGPT’s developer OpenAI notes it has some serious limitations. On its website, OpenAI explains: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”

In several high-profile cases in the United States, the United Kingdom, and now Canada, the chatbot’s “plausible-sounding but incorrect” answers or “hallucinations” found their way into briefs submitted to courts. Unless ChatGPT’s output is fact-checked, there is potential for this predictive text tool to make an incorrect prediction that is subsequently presented as fact.

Ke’s statements to the court in response to the discovery of the AI-produced fake case submission suggest her actions were accidental, based on ignorance of the technology rather than an intent to mislead the court.

Nevertheless, whether intentional or not, the damage done to the reputation of the court and our profession is very real - particularly because this case involved the interests of children.

While reasonable people can come to different conclusions about the facts of a matter before a court, it is incumbent on our legal system to ensure evidence heard before a court is real and accurate.

Lawyers, as officers of the court, have a duty to be forthright and truthful; they can be disciplined if they are not being honest with evidence they are presenting. In Ke’s case, the Law Society of British Columbia, which issued a warning and guidance to lawyers about AI use in late 2023, could investigate the matter and take disciplinary action.

Convincing to a Fault.

When ChatGPT is prompted to write a convincing legal brief on a topic, it draws on what it knows about the form and style of legal briefs and publicly accessible data on a topic to generate a response. But what if it can’t identify an existing case to help it make a convincing and persuasive argument? Why not draw on aggregate data to create a case that would help it make its point?

ChatGPT worked as designed; the trouble is that what it was designed to do is not appropriate for a court of law. Courts weigh the value of verifiable facts to determine a truth that is, depending on the type of case, either beyond a reasonable doubt or more likely than not on the balance of probabilities. ChatGPT’s programming, by contrast, employs “truthiness”: it generates the text it predicts a user wants based on context.
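A toy sketch can illustrate the “truthiness” point. Real models work over subword tokens and billions of learned weights; here, a single hand-written probability table stands in for that machinery, with whole citations standing in for tokens.

```python
import random

# Toy illustration only: the core step of a language model is sampling the
# next token from a learned probability distribution over plausible
# continuations. Nothing in this step consults a case-law database.
# (Whole citations stand in for subword tokens here for readability, and
# the probabilities are invented.)
next_token_probs = {
    "R. v. Oakes, [1986] 1 S.C.R. 103": 0.31,  # a real case
    "Smith v. Jones, 2019 BCSC 412": 0.34,     # plausible-looking, may not exist
    "Doe v. Roe, 2021 ONCA 77": 0.35,          # plausible-looking, may not exist
}

tokens, weights = zip(*next_token_probs.items())
choice = random.choices(tokens, weights=weights, k=1)[0]
print(f"The brief continues with: {choice}")
# Whichever continuation is most plausible in context wins; whether the
# citation actually exists never enters the calculation.
```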

Beyond the inherent ethical questions that arise when professionals use AI to create a product without acknowledging the source, the emergence of fake cases in court could sully jurisprudence if judges are not careful to fact-check the case law presented in briefs. What occurred in this case is an enormous waste of court resources, and it is rightly sending shockwaves across the country.

The widespread availability of AI tools could also have profound effects on other types of evidence introduced in matters before the courts. How will courts respond to submissions in small claims courts and tribunals where individuals may be self-represented and lack the oversight of professional regulatory bodies that will hopefully deter this practice from becoming commonplace? As these tools improve to a point where they can be employed to forge evidence that humans cannot identify as fake, how will our justice system respond?

Thankfully, our governments and institutions are tackling these issues head-on by demanding guarantees from AI developers and developing regulations to protect our society from misuse of this technology. For example, the Federal Court’s Strategic Plan (2020-2025) noted its interest in this emerging field, and the Court has since issued interim principles, guidelines, and notices in response to developing events.

Many provinces have recently amended their Rules of Civil Procedure in response to the potential use of AI. Ontario, for example, now requires lawyers to certify “the authenticity of every authority” listed in their factums.
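For firms looking to operationalize that kind of certification, one possible first step is mechanical: extract every citation-like string from a draft factum so that each authority can be checked by a human against CanLII or an official reporter. The sketch below is my own assumption about one workable approach, not anything prescribed by a rule of court, and the pattern will not catch every citation format.

```python
import re

# Hedged sketch: pull Canadian neutral-citation-style strings (e.g.
# "2015 ONCA 55") out of a draft factum so each authority can be
# verified by a human before the lawyer certifies its authenticity.
NEUTRAL_CITATION = re.compile(r"\b(?:19|20)\d{2}\s+[A-Z]{2,6}\s+\d{1,5}\b")

def authorities_to_verify(factum_text: str) -> list[str]:
    """Return each distinct citation-like string found in the draft."""
    return sorted(set(NEUTRAL_CITATION.findall(factum_text)))

# "2023 BCSC 1234" below is a hypothetical placeholder citation.
draft = "As held in Moore v. Getahun, 2015 ONCA 55, and in 2023 BCSC 1234 ..."
for citation in authorities_to_verify(draft):
    print(f"Verify before certifying: {citation}")
```

A script like this only narrows the work; the certification itself still turns on a human confirming that each case exists and says what the factum claims it says.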

Embrace New Technology, Responsibly.

Long-time readers of this blog will know that Gluckstein Lawyers, our team of personal injury lawyers and staff, prides itself on being an early adopter of technology, including generative AI. Technological advances have created products that can be transformative in a practice such as ours. The cost savings and increased efficiency we’ve found by employing new tools judiciously have allowed us to free up staff time and redirect it to better serve our clients.

When it comes to integrating AI into our operations, clearly I’m not a Luddite in the way we’ve popularly come to understand the term.

But perhaps I do share some affinity with the historical Luddites in terms of their actual concerns about using new technology. As Kevin Binfield, editor of Writings of the Luddites, notes in a Smithsonian Magazine article, the Luddites “were totally fine with machines,” but opposed manufacturers using them in “a fraudulent and deceitful manner” to circumvent standard labour practices.

He explains: “They just wanted machines that made high-quality goods, and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”

If AI, for all its potential and limitations, is respected by its users, it can be a net benefit in a variety of sectors, including legal practice. But, if this technology is not well regulated and used responsibly, we run the risk of experiencing more incidents like the BC case.

Artificial intelligence tools such as ChatGPT are groundbreaking technological advancements. Like all disruptive technologies, they have the potential to both harm and help humanity. Ultimately, it is up to us to employ this technology for the benefit of humankind by refining and regulating it to limit the possibility of unintentional or malicious misuse.
