Employers are increasingly using artificial intelligence tools, including “generative AI,” to automate routine tasks, increase efficiency and improve the quality of goods and services. However, the prevalence of AI tools has sparked growing concern about their potential effects across countless industries, which in turn has drawn closer scrutiny from state, federal and global regulators. According to Stanford’s AI Index Report 2023: Measuring Trends in Artificial Intelligence, an analysis of the legislative records of 127 countries shows that “mentions of AI in global legislative proceedings” have increased dramatically since 2016.
What Is Generative AI?
AI is generally defined as the science of using machines to perform tasks that mimic or simulate human intelligence, such as problem solving and decision making. AI can be applied to many things: analyzing large amounts of data to make predictions, processing natural language to communicate with humans, recognizing images to identify and categorize them according to their content—and much, much more.
Significant advances have been made with generative AI, which refers to technology capable of creating new content: images, music, text or other creative work. According to the AI Index, generative AI emerged to captivate the public in 2022 with the release of text-to-image models, text-to-video systems and chatbots. The AI Index warns, however, that “these systems can be prone to hallucination, confidently outputting incoherent or untrue responses, making it hard to rely on them for critical applications.”
AI’s Impact on the Workplace
Unsurprisingly, the emergence of AI systems is rapidly changing the nature of work. Studies suggest that more than half of Fortune 500 companies are already using AI in talent acquisition to assess and review applicants, gauge a candidate’s potential fit for a role and provide self-service tools to answer employee-relations questions. According to the AI Index, AI tools “are tangibly helping workers,” and “the demand for AI-related professional skills is increasing across virtually every American industrial sector.”
For example, AI systems are automating complex tasks such as drafting documents and providing customer service without the need for human intervention. Such automation can displace jobs even for those in traditional professions—lawyers, teachers, doctors, nurses—and upend the workplace more broadly. The technology will likely continue to be integrated into offices and become an increasingly valuable tool for workers.
Some employers are wrestling with whether to allow staffers to use openly available generative AI tools, such as various cloud-based chatbot programs. Some have chosen to prohibit them until concerns about accuracy and security can be assuaged, while others are encouraging their workers to use such tools as part of their job as long as they don’t put the company’s confidential or proprietary information at risk. Restricting the use of such tools, though, may become only more challenging as the technology evolves and becomes a greater part of everyday life.
Given generative AI’s rapid acceleration, government regulators have expressed concern that such systems can reflect existing biases or introduce their own. Regulators have warned that AI systems may produce disparate treatment of protected classes—gender-based discrimination, for example—particularly when used to make hiring or promotion decisions. Relatedly, data privacy and security are a concern because AI systems collect and incorporate enormous amounts of data. That data can include proprietary information or trade secrets, as well as workers’ and customers’ personal information and biometric identifiers—notably facial scans—which could trigger compliance problems.
The Regulatory Response
Several states and local jurisdictions have begun to erect guardrails around the use of AI and automated decision-making technology in hiring and promotions. New York City has enacted a law that requires AI tools used in connection with certain employment decisions to be first subjected to a “bias audit” that assesses the tools’ potential “disparate impact” on protected categories—an audit whose results must be divulged publicly. The law also requires employers to notify candidates or employees who reside in the city that such tools are being used, and which qualifications and characteristics they assess. Violations will be subject to a civil penalty.
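To make the audit requirement concrete, the core arithmetic of a bias audit is a comparison of selection rates across demographic categories. The short Python sketch below, using entirely hypothetical applicant numbers, illustrates one common formulation: an impact ratio of each category’s selection rate to the highest category’s rate, flagged against the EEOC’s familiar four-fifths rule of thumb. It is offered only to illustrate the concept, not as a template for complying with the New York City law or any other rule.

```python
# Illustrative sketch of a disparate-impact calculation.
# The candidate counts below are hypothetical; a real bias audit
# uses actual selection data and must follow the governing rules,
# not this rule of thumb.

# applicants screened by the AI tool: (selected, total) per category
outcomes = {
    "group_a": (48, 100),  # hypothetical numbers
    "group_b": (30, 100),
    "group_c": (45, 100),
}

# selection rate = selected / total for each category
rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
best = max(rates.values())

# impact ratio = a category's rate divided by the highest category's rate;
# under the EEOC's four-fifths rule of thumb, a ratio below 0.8 is often
# treated as preliminary evidence of adverse impact
for group, rate in rates.items():
    ratio = rate / best
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact_ratio={ratio:.2f} -> {flag}")
```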
In California, regulators have considered Automated Decision Tool (ADT) strictures that would subject employers to potential liability under the state’s antidiscrimination laws. Additionally, this April, Golden State legislators proposed ADT audit rules similar to those in New York City. According to proponents of the proposed legislation (AB331), “[a]s decision making via algorithm becomes more prevalent in our daily lives, it is crucial that we ensure that it is used ethically and responsibly. . . . Without quick, thoughtful regulation, we face a future where decision making is heavily biased without any protections from the devastating impacts.”
AB331 would require developers and users of ADTs to conduct and record an impact assessment including the intended use, the makeup of the data and the rigor of the statistical analysis. The data reported would include an analysis of the potential adverse impact based on race, color, ethnicity, sex, religion, age, national origin or any other classification protected by state law. Opponents of the bill, including the California Chamber of Commerce, argue that “overregulation in this space can easily undermine many beneficial uses of ADT—including the ability to develop and deploy these tools in a manner that can in fact reduce the instances and effects of human bias.” They urge lawmakers to employ “greater clarity, precision and narrowing of the bill to avoid unintended consequences.”
At the federal level, the Biden administration has also pushed for stronger measures to regulate the use of AI. In April, the Commerce Department issued a request for comment on “accountability measures and policies” to seek “assurance” that “AI systems are legal, effective, ethical, safe and otherwise trustworthy.” It added that “advancing trustworthy artificial intelligence is an important federal objective. The National AI Initiative Act of 2020 established federal priorities for AI, creating the National AI Initiative Office to coordinate federal efforts to advance trustworthy AI applications, research and U.S. leadership in the development and use of trustworthy AI in the public and private sectors.”
In October 2022, the White House signaled its concerns in its “Blueprint for an AI Bill of Rights,” which outlined five nonbinding principles for the design, use and deployment of AI technology: safe and effective systems, protections against algorithmic discrimination, data privacy, notice and explanation, and the consideration of human alternatives. The White House also issued “From Principles to Practice,” which it described as “a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process.” It added that “these principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities or access to critical needs.”
The U.S. Equal Employment Opportunity Commission and the Department of Justice have further warned employers that the use of AI and automated decision-making tools in hiring could result in unlawful discrimination against applicants and employees with disabilities. According to the EEOC, “[e]mployers now have a wide variety of computer-based tools available to assist them in hiring workers, monitoring worker performance, determining pay or promotions and establishing the terms and conditions of employment,” adding that “[e]mployers may utilize these tools in an attempt to save time and effort, increase objectivity or decrease bias. However, the use of these tools may disadvantage job applicants and employees with disabilities.” To address those risks, the EEOC issued technical assistance in the form of questions and answers that “explain how employers’ use of software that relies on algorithmic decision making may implicate existing requirements under the Americans with Disabilities Act.” The guidance also provides practical tips to employers on how to comply with the ADA.
The Road Ahead
As businesses increasingly rely on AI technology, legislators and regulators are expected to continue to scrutinize its use. It will be important for companies and legal professionals to stay up to date on advances and new applications of such technologies.
In the meantime, employers may want to review the extent to which they already rely on AI systems and how they apply them. They should also examine whether their policies prohibit or restrict workers’ unauthorized use of third-party AI tools on the job.
Jennifer G. Betts is the Office Managing Shareholder of Ogletree Deakins’ Pittsburgh office. She is also the co-chair of the firm’s national Technology Practice Group. Jenn has been representing employers in all areas of labor and employment law for more than 15 years, including discrimination, harassment, whistleblower, retaliation, class and collective actions, non-competition and non-disclosure covenants, union campaigns, collective bargaining, and unfair labor practices.
Danielle Ochs is a shareholder in the San Francisco office of Ogletree Deakins. She serves as co-chair of the firm’s Technology Practice Group and focuses on matters arising in the TECHPLACE™. She has more than 25 years of experience representing employers in matters involving employment issues and employment-based technology, trade secrets, and unfair competition in federal and state trial and appellate courts, arbitration, and administrative agencies.