Technology is evolving rapidly, and we have had to evolve with it. The days of sitting in a law firm library with a stack of books open to the relevant cases, a stack of Shepard’s pamphlets to update research with, and an afternoon to spend thinking about an issue are long gone.
But nostalgia for the past ignores the efficiency and economy that technology has made possible. There are costs and benefits. Indeed, Rule of Professional Conduct 1.1 requires us to be current with technology as a component of competency. See RPC 1.1, Comment 8 (“A lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”). The challenge is to embrace the efficiencies of instant messaging without succumbing to the temptation to become an “instant lawyer!”
In particular, the use of artificial intelligence (AI) presents both peril and promise. By reducing the time-intensive labor inherent in litigation, AI could enable lawyers to handle lawsuits more economically, easing access to both civil and criminal justice. But there are risks. For example, the algorithms used in AI may include biases that are then amplified, leading to skewed results. Client confidences could be at risk. AI is also subject to mistakes and “hallucinations.” We have already seen cases of lawyers submitting briefs citing cases that don’t exist.
In response to these threats, the Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility, along with the Philadelphia Bar Association Professional Guidance Committee, published Joint Formal Opinion No. 2024-200, titled “Ethical Issues Regarding the Use of Artificial Intelligence.” This opinion, which was well summarized by Daniel Siegel in his column published in June, is one of many, but it is also one of the most comprehensive. Now the American Bar Association’s Standing Committee on Ethics and Professional Responsibility has issued a similarly thorough opinion on the issue, Formal Opinion 512, titled “Generative Artificial Intelligence Tools.”
While bar association ethics opinions are not “binding precedent,” they are often relied on by courts and regulators as authoritative sources for interpreting the Rules of Professional Conduct. Formal Opinion 512 does an excellent job of analyzing the rules that are implicated by the use of generative AI (GAI). The opinion wisely cautions that as the technology continues to develop, the analysis of its ethical implications will also need to change. (Here the opinion acknowledges the chief problem with the growth of technology that we experience daily: our use of different technologies has outpaced our ability to understand, anticipate and manage their consequences.)
Formal Opinion 512 defines generative AI (GAI) as a technology that can create new content based on its review of large amounts of text from the internet or other sources. Certain GAI tools, including those used in due diligence or document review, are “self-learning”—meaning they will learn to produce more targeted and relevant results as they review more data. These tools can be used to evaluate huge volumes of documents, create timelines, do legal research and even draft legal documents.
The opinion analyzes an attorney’s use of GAI through the prism of six separate Rules of Professional Conduct: RPC 1.1, the duty of competency; RPC 1.6, the duty of confidentiality; RPC 1.4, the duty of communication; RPC 3.1, 3.3 and 8.4(c), the duties of presenting meritorious claims and candor to the tribunal; RPC 5.1 and 5.3, the duty to supervise; and RPC 1.5, the duty to charge reasonable fees. In analyzing the application of each of these rules to GAI, the opinion also provides a thorough primer on the importance and meaning of each of these rules and, for that reason alone, is worth the read.
Here are some of the takeaways from the opinion’s analysis of each rule:
Rule 1.1: Competency
As stated above, legal competency requires understanding of applicable technology. The opinion makes clear that lawyers do not need to be “experts” in GAI, but “must have a reasonable understanding of the capabilities and limitations of the specific GAI technology that the lawyer might use.” See Opinion 512 at 4. Also, because GAI makes mistakes, to comply with RPC 1.1, a lawyer must provide an “appropriate degree of independent verification … of its output.” Ultimately, whether a lawyer is relying on GAI, an associate or a contract attorney, “the lawyer is fully responsible for the work on behalf of the client.”
Rule 1.6: Confidentiality
The obligation of client confidentiality is far broader than the attorney-client privilege. With limited exceptions, a “lawyer shall not reveal information relating to representation of a client unless the client gives informed consent, except for disclosures that are impliedly authorized in order to carry out the representation.” RPC 1.6(a). The risk Opinion 512 identifies is that, once confidential client information is provided to a “self-learning” GAI tool, that tool may use that information in other cases, either outside the firm or within it. Therefore, the lawyer must get the client’s informed consent before providing a GAI tool with information “related” to a representation.
Rule 1.4: Communication
Opinion 512 notes that even where a client’s informed consent to the use of a GAI tool is not required, RPC 1.4’s duty of communication may require that a lawyer tell a client when a GAI tool is being used. Some clients, in their outside counsel guidelines, are now requiring that a firm disclose its use of GAI. A best practice is to raise this issue at the outset of the representation, to ensure that you and the client agree on the use of new technology. This understanding should also be memorialized in your fee agreement.
Rules 3.1, 3.3, 8.4(c): Meritorious Claims and Candor to the Court
All lawyers are familiar with their obligations to bring meritorious claims and their duty of candor to the court. Consistent with their duty of competency, lawyers must ensure that their GAI-based research is correct. We are all on notice now that “hallucinations” happen. Whatever tools are used, a lawyer is obligated to ensure that their research and theories are accurate and supported.
Rules 5.1 and 5.3: Supervisory Responsibilities
The duty to supervise subordinate lawyers and nonlawyers applies to their use of GAI tools. As in every aspect of practice, clear policies and training are required as part of the “reasonable efforts to ensure that the firm has in effect measures giving reasonable assurance that all lawyers in the firm conform to the Rules of Professional Conduct.” A reasonable GAI policy should include, at a minimum, requirements that GAI research be verified and that no client information be provided to the tool without client consent. The opinion suggests that training “could include the basics of GAI technology, the capabilities and limitations of the tools, ethical issues in the use of GAI and best practices for secure data handling, privacy, and confidentiality.”
As to supervising the use of GAI by outside vendors, the opinion advises, inter alia, that the lawyer confirm that the GAI tool is designed to protect confidentiality and security, ask whether the vendor will advise of breaches, and determine what use the vendor will make of the lawyer’s information after the conclusion of the engagement.
Rule 1.5: Fees
The opinion also discusses how the use of a GAI tool should be billed. Is it an overhead expense that is subsumed in the hourly rate, or is it a stand-alone expense that is billed the way some firms bill legal research costs through Lexis or Westlaw? Depending on the circumstances, if the lawyer wishes to charge for the use of the tool, the charge must be reasonably based on direct cost and a “reasonable allocation of overhead expenses directly associated with the provision of the service.” We know that Rule 1.5 prohibits “excessive fees.” However, because the rule gives limited guidance on how “excessiveness” is measured, clarity on this issue in the engagement letter is a best practice.
As with all technologies, GAI has the potential to be a bane or a blessing. The “deepfakes” and other abuses are truly frightening. However, the use of GAI as a tool to attack the crisis of access to justice by making litigation affordable and, thus, available to so many, is a real possibility. We know this—there is no going backwards. Opinion 512 is an important and helpful tool for lawyers as they enter this brave new world of artificial intelligence.
Ellen C. Brotman, of BrotmanLaw in Philadelphia, represents individuals before licensing boards, providing effective, caring and efficient assistance. She has served as an assistant federal defender in Philadelphia and practiced in small, medium and large firms with a focus on criminal defense, appellate advocacy, professional responsibility and ethics.
Reprinted with permission from the August 7, 2024 issue of The Legal Intelligencer. © 2024 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.