ATTENTION CALIFORNIA EMPLOYERS: PREPARE FOR THE RISE OF AI IN THE WORKPLACE WITH HIGHLIGHTS OF RECENT FEDERAL, STATE AND LOCAL DEVELOPMENTS 

Recognizing that generative artificial intelligence (“AI”) is revolutionizing the way we live and work, regulators are introducing new guidelines to ensure that the benefits of AI are leveraged in compliance with existing law. On the federal level, the White House and U.S. Department of Labor (“DOL”) each released a series of new workplace AI guidance documents addressing a range of issues, from compliance with Equal Employment Opportunity (“EEO”) laws to best practices for workplace AI use. Not to be outdone, several states and localities have introduced or enacted their own legislation to regulate workplace AI use. This AI roundup discusses what the federal government wants you to know about AI in the workplace, and what you should know about state and local AI developments that may affect your workplace.

Federal Guidance

The White House’s Executive Order

First up for discussion is the Executive Order (“EO”) that spawned the recent flurry of federal AI workplace guidance documents highlighted below. In October 2023, President Biden’s landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence addressed several areas of mounting AI-related concern, including potential inequities that may arise with the use of AI in the employment context. To mitigate the risks and maximize the benefits of AI, the EO directed several federal agencies to provide collaborative guidance for the responsible development and use of AI. Among those directives, the EO called upon the Secretary of Labor to develop principles and best practices governing the use of AI by employers, agencies, and federal contractors to ensure appropriate compensation for workers, fair evaluation of job applications, protection of workers’ rights, and prevention of unlawful discrimination in AI-assisted decision-making processes. Discussed below are the guidance documents issued by federal agencies to date in response to the EO’s directives.

The DOL’s Wage and Hour Division’s Field Assistance Bulletin

On April 29, 2024, the DOL’s Wage and Hour Division (“WHD”) released a Field Assistance Bulletin (“FAB”) to provide guidance to employers on the compliance risks that arise with the use of workplace AI and other technologies under the Fair Labor Standards Act, the Family and Medical Leave Act, the Providing Urgent Maternal Protections for Nursing Mothers Act, the Employee Polygraph Protection Act of 1988, and the anti-retaliation provisions of WHD-enforced laws. The bottom line of the lengthy FAB: workplace AI and other technologies are not a substitute for responsible human oversight to ensure compliance with these laws. Employers remain on the hook for any violations of law that arise with the use of AI and other technologies in the workplace. For more detail, see our recent eAlert on the FAB HERE.

Guidance from the DOL’s Office of Federal Contract Compliance Programs

On the same day that the FAB was released, another DOL agency, the Office of Federal Contract Compliance Programs (“OFCCP”), released a document providing guidance to federal contractors regarding the use of workplace AI and automated decision-making tools. The Artificial Intelligence and Equal Employment Opportunity for Federal Contractors guidance addresses compliance risks and obligations in the EEO context. Although the guidelines are directed at federal contractors, all employers using workplace AI can benefit from the guidance’s “promising” practices aimed at mitigating the potentially harmful impacts of automated decision-making systems in the workplace.

According to the guidance, it is best practice for employers to:

  • Understand the business needs that motivate the use of the AI system.
  • Be informed of the data collected and analyzed by the AI system and how the data is used in the selection process or other employment decisions.
  • Analyze the job-relatedness of the AI tool’s selection procedures.
  • Conduct and maintain records of routine independent assessments of AI tools for bias or inequitable results (a simplified illustration of such an assessment appears after this list).
  • Explore less discriminatory alternative selection procedures.
  • Provide applicants and employees with advance notice of any AI-assisted decision making, including disclosure of the data to be collected and how the data is used for evaluation.
  • Inform applicants and employees of procedures to request and obtain reasonable accommodation.
  • Safeguard information collected and provide options to review, correct, or delete data collected.
  • Maintain responsible human oversight regarding employment decisions made or supported by AI.
  • Provide appropriate training to those responsible for monitoring and analyzing the AI system.
  • When working with vendors, ensure that they have considered the needs of individuals with disabilities, tested the AI system for disparate or adverse impact on individuals with disabilities, and confirmed that the AI system accurately measures a candidate’s skills based on the essential functions of the job.

As further discussed below, some of these practices may already be legal requirements in certain states or localities. 
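
The OFCCP guidance does not prescribe a methodology for these routine independent assessments, but one long-standing benchmark in the EEO context is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a selection rate for any group that is less than 80% of the rate for the most-selected group may be evidence of adverse impact. The Python sketch below is a minimal, hypothetical illustration of that arithmetic only – the function names and figures are invented for this example, and a real assessment should be designed with counsel and qualified statistical support.

    # Minimal sketch of a four-fifths-rule adverse impact check.
    # All names and numbers are hypothetical; this is illustrative
    # arithmetic, not a compliant bias audit methodology.

    def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """Map each group to its selection rate (selected / considered)."""
        return {group: selected / considered
                for group, (selected, considered) in outcomes.items()}

    def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """Each group's selection rate relative to the most-selected group."""
        rates = selection_rates(outcomes)
        top = max(rates.values())
        return {group: rate / top for group, rate in rates.items()}

    # Hypothetical screening outcomes: (candidates advanced, candidates screened)
    example = {"group_a": (48, 100), "group_b": (30, 100)}
    for group, ratio in impact_ratios(example).items():
        flag = "review for adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")

In this hypothetical, group_b’s impact ratio of 0.63 falls below the 0.8 benchmark and would warrant closer review – a reminder that documenting both the calculation and the follow-up analysis is part of maintaining the records the guidance recommends.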

The DOL’s AI Guidance for Developers and Employers

Proving there is no rest for the weary, barely two weeks later, on May 16, 2024, the DOL released additional guidance, entitled Artificial Intelligence and Worker Well-being: Principles for Developers and Employers, addressing the responsible development and deployment of AI and automated systems in the workplace. The guidance sets forth eight principles that apply throughout the entire lifecycle of AI – from development and testing to deployment and auditing – and across all sectors.

The DOL’s AI Principles for Developers and Employers include:

  • Centering Worker Empowerment: Keep workers and their representatives informed and involved throughout the lifecycle of AI systems used in the workplace, including design, development, testing, use, and oversight.
  • Ethically Developing AI: Design and develop AI systems in a way that protects workers.
  • Establishing AI Governance and Human Oversight: Develop and maintain clear governance systems, procedures, human oversight, and evaluation processes for the use of AI systems in the workplace.
  • Ensuring Transparency in AI Use: Be transparent with workers and job seekers about AI systems used in the workplace.
  • Protecting Labor and Employment Rights: Ensure workplace AI systems do not violate or undermine workers’ rights or protections against discrimination and retaliation.
  • Using AI to Enable Workers: Use AI systems to assist, complement, and enable workers, and improve job quality.
  • Supporting Workers Impacted by AI: Support or upskill workers during AI-related job transitions.
  • Ensuring Responsible Use of Worker Data: Ensure all worker data collected, used, or created by AI systems are limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly.

These principles are not intended to be an exhaustive list, but rather a guiding framework for businesses. Accordingly, developers and employers should review and customize the principles based on industry or workplace context and worker input. 

State and Local Developments

Despite the federal government’s recent focus on all things AI, it was too little, too late in the eyes of several states and localities that rushed to develop their own laws to govern the use of workplace AI in their respective jurisdictions. Summarized below is a non-exhaustive list of state and local workplace AI legislation – some that are headed your way and some already here to stay. 

California

In May 2024, the California Civil Rights Department (“CRD”) released proposed regulations governing the use of AI and automated decision-making systems with respect to applicants and employees. The proposed regulations address several issues, including third-party liability, potential discrimination arising from the use of automated systems, and the use of automated systems for background checks and medical or psychological inquiries. Public comments must be submitted by July 18, 2024.

Highlights of the proposed regulations include the following:

  • Employers are prohibited from using selection criteria that may result in adverse impact or disparate treatment of individuals or groups of individuals based on protected characteristics, unless the selection criteria are job-related and consistent with business necessity. 
  • Prior to denying an applicant a position based on criminal history, employers must make an individualized assessment of whether the conviction history has a direct and adverse relationship to the specific duties of the job that would justify the denial. The proposed regulations clarify that the use of an automated decision system, alone, does not constitute an individualized assessment.
  • Employers who withdraw a conditional offer based on criminal history must provide the applicant with a copy or description of the report and assessment criteria used by the automated system.
  • The proposed regulations clarify that medical or psychological inquiries include (1) personality-based questions, such as those meant to measure optimism or emotional stability; and (2) puzzles, games, and other challenges that evaluate physical or mental abilities.
  • The proposed regulations clarify that third parties are liable for the design, development, advertisement, sale, and use of automated systems where the use constitutes unlawful disparate treatment or has an unlawful adverse impact on applicants or employees.

Like the CRD, the California Legislature is seeking to address the use of AI in employment decisions. In February 2024, California introduced AB2930, which would prohibit the use of an automated decision tool (“ADT”) to make “consequential decisions” in a manner that results in “algorithmic discrimination.” The proposed law would require employers with 25 or more employees to:

  • Conduct and record impact assessments of ADTs on or before January 1, 2026, and annually thereafter, and maintain the results for two years;
  • Provide impact analysis statements to address content specified in the legislation, including the ADT’s purpose and description, data collected by the ADT, potential adverse impact analyses based on protected characteristics, safeguards to address reasonably foreseeable risks of algorithmic discrimination, and how the ADT will be used and evaluated for validity or relevance;
  • Provide impact assessments to the CRD within seven days of a request;
  • Provide advance notice to applicants and employees that an ADT will be used to make a consequential decision and a statement of the ADT’s purpose;
  • Accommodate an applicant’s or employee’s request for an alternative selection process or accommodation, if technically feasible, when a decision is based solely on the ADT’s output;
  • Establish, implement, and maintain a governance program to manage reasonably foreseeable risks of algorithmic discrimination associated with the ADT; and
  • Make publicly available a policy that provides the types of ADTs that are used or made available by the employer and how the reasonably foreseeable risks of algorithmic discrimination are managed.

Additional States with Workplace AI Laws or Legislation

Employers in other locations should be aware of the following AI laws and legislation that may soon – or may already – apply to their workplace:

New York: In 2021, New York City enacted Local Law 144 to regulate the use of automated employment decision tools (“AEDT”) for the screening of applicants and employees. Effective as of July 5, 2023, covered employers and employment agencies that use AEDTs in the screening process are required to conduct annual bias audits, publish a public summary of the audit results, and provide advance notice to applicants and employees regarding the use of an AEDT in the evaluation process. At the state level, similar AEDT legislation pending in the New York Assembly would prohibit employers from using an AEDT in the screening and hiring process unless the AEDT has been subject to a disparate impact analysis and the results are provided to the New York Department of Labor.
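
For AEDTs that output a score rather than a simple pass/fail result, the bias audit summaries published under Local Law 144 generally report “impact ratios” based on scoring rates – under the implementing rules of the NYC Department of Consumer and Worker Protection, a group’s scoring rate is the share of that group scoring above the sample median. The sketch below illustrates that computation on invented data; the published audit format and required demographic categories are governed by the implementing rules, not by this example.

    # Minimal sketch of a Local Law 144-style impact ratio for a
    # score-based AEDT. Data and group labels are hypothetical; the
    # required audit categories and format come from the NYC DCWP rules.
    from statistics import median

    def scoring_rates(scores_by_group: dict[str, list[float]]) -> dict[str, float]:
        """Share of each group scoring above the overall sample median."""
        all_scores = [s for scores in scores_by_group.values() for s in scores]
        cutoff = median(all_scores)
        return {group: sum(s > cutoff for s in scores) / len(scores)
                for group, scores in scores_by_group.items()}

    def score_impact_ratios(scores_by_group: dict[str, list[float]]) -> dict[str, float]:
        """Each group's scoring rate relative to the highest-scoring group."""
        rates = scoring_rates(scores_by_group)
        top = max(rates.values())
        return {group: rate / top for group, rate in rates.items()}

    # Hypothetical AEDT scores for two applicant groups
    example = {"group_a": [72, 81, 65, 90, 78], "group_b": [60, 70, 55, 74, 68]}
    print(score_impact_ratios(example))
    # {'group_a': 1.0, 'group_b': 0.25}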

New Jersey: In February 2024, New Jersey introduced two bills aimed at regulating the use of AI tools in the hiring process. The first bill would prohibit the sale of an AEDT unless certain conditions and requirements are met and, additionally, would require employers using AEDTs to publish a public summary of the most recent audit results and provide notice to employees that an AEDT was used to evaluate their candidacy. The second bill targets the use of AI-enabled video interviews in the hiring process and would require employers to provide advance notice of, and obtain written consent for, the use of AI in the evaluation process.

Colorado: Colorado recently became the first state to enact comprehensive legislation regulating the use and development of workplace AI systems. Effective February 1, 2026, developers and deployers of “high-risk” AI systems are required to use “reasonable care” to protect consumers from known or reasonably foreseeable risks of “algorithmic discrimination.” The law sets out obligations for developers and deployers; compliance with those obligations creates a rebuttable presumption of “reasonable care.”

Illinois: In 2020, Illinois enacted a law to regulate the use of AI-enabled video interviews to screen applicants. Among the requirements, employers must provide advance notice of and obtain prior consent for the use of AI to evaluate video interviews. 

Maryland: In 2020, Maryland enacted a law that requires employers to obtain prior written consent from applicants for the use of facial recognition technology during interviews. 

