ARTIFICIAL INTELLIGENCE (AI) presents challenging issues for criminal law practitioners. AI began making widespread headlines in late 2022 with the introduction of ChatGPT, a tool that uses so-called large language models to generate text in response to a user query. That platform, though remarkable, merely scratches the surface of the possibilities and quandaries of AI.
There has since been a steady increase in platforms that permit the public to use AI to produce not only text but images, audio and video. Clearly, there are societal benefits to certain aspects of AI that we have all seen, including the ability of machine learning to positively affect cancer research and the medical profession’s overall effort to combat disease. But AI also creates countless legal considerations and potential pitfalls, most acutely in criminal law.
The U.S. Department of Justice has been transparent about its plans to use AI techniques to prosecute criminal matters. In this regard, AI can surely enhance efficiency and be a powerful law enforcement tool. Yet there are a number of concerns and risks here as well. The humans who train these systems can, intentionally or inadvertently, pass along their own biases and ethical shortcomings, producing discriminatory outcomes and inaccuracies. AI can also generate output that lacks reasoned, coherent policy and the judgment and independent thought humans provide. Defense attorneys would be wise to monitor the developments described below.
Government’s Approach to AI
AI is emerging in white-collar matters concerning price fixing, market manipulation and fraud. It can be used to create fraudulent images and videos, and any crime that relies on the ability to synthesize vast amounts of data will be affected, both in the way bad actors structure their crimes and in the tools law enforcement will use to catch criminals. While some local government agencies are only beginning to explore AI, DOJ has recognized that AI can affect both the commission of crimes and their detection.
In February 2024, during remarks at Oxford University, U.S. Deputy Attorney General Lisa Monaco addressed openly the evolving concerns with AI and criminal enforcement, highlighting DOJ’s efforts to address AI’s risk to national security and its position on accountability in offenses involving misuse of AI. Monaco expanded on her remarks the next month at the American Bar Association’s 39th National Institute on White Collar Crime, further addressing the rise of AI through existing sentencing guidelines and corporate enforcement programs. In both speeches, she observed that “all new technologies are a double-edged sword—but AI may be the sharpest blade yet. It holds great promise to improve our lives, but great peril when criminals use it to supercharge their illegal activities, including corporate crime.”
Recognizing that “existing laws offer a firm foundation” and noting that DOJ will seek stiffer sentences for offenses made “significantly more dangerous by the misuse of AI,” Monaco also announced “Justice AI,” a series of meetings in which a team of individuals from academia, law enforcement, science and industry will collaborate and prepare for how AI can be used positively while also safeguarding against risks. The team will generate a report by the end of the year to inform President Biden about AI in the criminal justice system.
Perhaps the most significant takeaway from Monaco’s March remarks at the ABA was her reiteration of DOJ’s AI policy and its keen interest in corporate compliance. She emphasized that if Justice is investigating a company, prosecutors will assess its management of its AI-related risks as part of its overall compliance effort and may adjust sentencing guidelines as a result. She added that she “directed the Criminal Division to incorporate assessment of disruptive technology risks—including risks associated with AI—into its guidance on Evaluation of Corporate Compliance Programs.”
Takeaways
Given DOJ’s focus on identifying and prosecuting crimes that rely on AI, criminal defense attorneys should expect to see more AI in federal law enforcement investigations of both individuals and corporations. As the technology continues to develop, criminal trial lawyers must stay abreast of current criminal laws and of any responsive legislation from Congress addressing the technology. As one example, Monaco warned that where there is significant misuse of AI in committing a crime, prosecutors will use sentencing enhancements to seek increased penalties. Of course, this is just the tip of the iceberg.
DOJ will now evaluate a company’s management of AI risks as part of its corporate compliance efforts. Corporations must stay ahead of this emerging area by keeping their compliance procedures updated, implementing protocols to address AI risks and promptly mitigating any problems. Compliance officers would be prudent to learn everything they can about AI’s role in their particular businesses and to identify how it could be misused by rogue insiders or malicious hackers.
As Monaco said, AI is indeed a double-edged sword, and as such, it must be wielded carefully and responsibly. AI’s impact on society generally, and on criminal law specifically, is recent but already profound. This will continue as fraudsters learn how to employ machine learning in their crimes, and as law enforcement gears up to catch these criminals. As the government develops an overall approach to AI, criminal practitioners must monitor DOJ’s statements about it, stay informed about the benefits and pitfalls of the technology, be proactive in testing its boundaries and help clients identify when to test the limits of AI as part of a trial strategy. Stay tuned—it really is the Wild West out there.