Recognizing that generative artificial intelligence (“AI”) is revolutionizing the way we live and work, regulators are introducing new guidelines to ensure that the benefits of AI are leveraged in compliance with existing law. On the federal level, the White House and U.S. Department of Labor (“DOL”) each released a series of new workplace AI guidance documents addressing a variety of issues, from compliance with Equal Employment Opportunity (“EEO”) laws to best practices for workplace AI use and everything in between. Not to be outdone, several states and localities have advanced their own legislation to regulate workplace AI use. This AI roundup discusses what the federal government wants you to know about AI in the workplace, and what you should know about state and local AI developments that may affect your workplace.
Federal Guidance
The White House’s Executive Order
First up for discussion is the Executive Order (“EO”) that spawned the recent flurry of federal AI workplace guidance documents highlighted below. In October 2023, President Biden’s landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence addressed several areas of mounting AI-related concern, including potential inequities that may arise with the use of AI in the employment context. To mitigate the risks and maximize the benefits of AI, the EO directed several federal agencies to provide collaborative guidance for the responsible development and use of AI. Among the directives issued, the EO called in particular upon the Secretary of Labor to develop principles and best practices governing the use of AI by employers, agencies, and federal contractors to ensure appropriate compensation for workers, fair evaluations of job applications, protection of workers’ rights, and prevention of unlawful discrimination in AI-assisted decision-making processes. Discussed below are the various guidance documents issued by federal agencies to date in response to the EO’s directives.
The DOL’s Wage and Hour Division’s Field Assistance Bulletin
On April 29, 2024, the DOL’s Wage and Hour Division (“WHD”) released a Field Assistance Bulletin (“FAB”) providing guidance to employers on the compliance risks that arise with the use of workplace AI and other technologies under the Fair Labor Standards Act, the Family and Medical Leave Act, the Providing Urgent Maternal Protections for Nursing Mothers Act, the Employee Polygraph Protection Act of 1988, and the anti-retaliation provisions of WHD-enforced laws. The bottom line of the lengthy FAB: workplace AI and other technologies are not a substitute for responsible human oversight to ensure compliance with these laws. Employers remain on the hook for any violations of law that arise from the use of AI and other technologies in the workplace. For more detail, see our recent eAlert on the FAB HERE.
Guidance from the DOL’s Office of Federal Contract Compliance Programs
On the same day the FAB was released, another agency within the DOL, the Office of Federal Contract Compliance Programs (“OFCCP”), released a document providing guidance to federal contractors regarding the use of workplace AI and automated decision-making tools. The Artificial Intelligence and Equal Employment Opportunity for Federal Contractors guidance addresses compliance risks and obligations in the EEO context. Although the guidelines are directed to federal contractors, all employers using workplace AI can benefit from the guidance’s “promising” practices aimed at mitigating the potentially harmful impacts of automated decision-making systems in the workplace.
According to the guidance, it is best practice for employers to:
- Understand the business needs that motivate the use of the AI system.
- Be informed of the data collected and analyzed by the AI system and how the data is used in the selection process or other employment decisions.
- Analyze the job-relatedness of the AI tool’s selection procedures.
- Conduct and maintain records of routine independent assessments of AI tools for bias or inequitable results.
- Explore less discriminatory alternative selection procedures.
- Provide applicants and employees with advance notice of any AI-assisted decision-making, including disclosure of the data to be collected and how the data is used for evaluation.
- Inform applicants and employees of procedures to request and obtain reasonable accommodation.
- Safeguard information collected and provide options to review, correct, or delete data collected.
- Maintain responsible human oversight regarding employment decisions made or supported by AI.
- Provide appropriate training to those responsible for monitoring and analyzing the AI system.
- When working with vendors, ensure they have considered the needs of individuals with disabilities, have tested the AI system for disparate or adverse impact on individuals with disabilities, and have confirmed that the AI system accurately measures a candidate’s skills based on the essential functions of the job.
As further discussed below, some of these practices may already be legal requirements in certain states or localities.
The DOL’s AI Guidance for Developers and Employers
Proving there is no rest for the weary, just over two weeks later, on May 16, 2024, the DOL released additional guidance, entitled Artificial Intelligence and Worker Well-being: Principles for Developers and Employers, addressing the responsible development and deployment of AI and automated systems in the workplace. The guidance sets forth eight principles that apply throughout the entire lifecycle of AI, from development and testing to deployment and auditing, and are applicable to all sectors.
The DOL’s AI Principles for Developers and Employers include:
- Centering Worker Empowerment: Keep workers and their representatives informed and involved throughout the lifecycle of AI systems used in the workplace, including design, development, testing, use, and oversight.
- Ethically Developing AI: Design and develop AI systems in a way that protects workers.
- Establishing AI Governance and Human Oversight: Develop and maintain clear governance systems, procedures, human oversight, and evaluation processes for the use of AI systems in the workplace.
- Ensuring Transparency in AI Use: Be transparent with workers and job seekers about AI systems used in the workplace.
- Protecting Labor and Employment Rights: Ensure workplace AI systems do not violate or undermine workers’ rights or protections against discrimination and retaliation.
- Using AI to Enable Workers: Use AI systems to assist, complement, and enable workers, and improve job quality.
- Supporting Workers Impacted by AI: Support or upskill workers during AI-related job transitions.
- Ensuring Responsible Use of Worker Data: Ensure all worker data collected, used, or created by AI systems are limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly.
These principles are not intended to be an exhaustive list, but rather a guiding framework for businesses. Accordingly, developers and employers should review and customize the principles based on industry or workplace context and worker input.
State and Local Developments
Despite the federal government’s recent focus on all things AI, it was too little, too late in the eyes of several states and localities that rushed to develop their own laws to govern the use of workplace AI in their respective jurisdictions. Summarized below is a non-exhaustive list of state and local workplace AI legislation – some that are headed your way and some already here to stay.
New York
Back in 2021, New York City (“NYC”) became a frontrunner in the legislative AI race when it enacted Local Law 144, a first-of-its-kind law aimed at regulating the use of automated employment decision tools (“AEDTs”) for the screening of applicants and employees. Specifically, effective as of July 5, 2023, employers using AEDTs to screen applicants or employees for a NYC-based job or promotion (and employment agencies located in NYC or filling positions based in NYC) are required to:
- Conduct annual bias audits of the AEDT by an “independent auditor” – the first of which must occur within a year prior to initial use (an illustration of the selection-rate math underlying these audits appears after this list);
- Publish a public summary of the most recent audit results on their website; and
- Provide advance notice to applicants and employees disclosing the use of an AEDT in the evaluation process, the qualities and characteristics to be considered by the AEDT, and instructions to request reasonable accommodations or an alternative selection process under other laws, if available.
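Although the legal requirements are what matter, it may help to see what a bias audit actually computes. The rules implementing Local Law 144, issued by the NYC Department of Consumer and Worker Protection, center on calculating selection rates and impact ratios across demographic categories. The Python sketch below is a minimal, hypothetical illustration of that calculation only; the function name, category labels, and counts are invented for illustration, and the actual audit methodology, required categories, and reporting format are governed by the rules themselves.

```python
# Minimal, hypothetical sketch of an impact-ratio calculation of the kind
# performed in AEDT bias audits. Category names and counts are invented;
# the actual audit methodology is prescribed by NYC DCWP rules.

def impact_ratios(selected: dict[str, int], screened: dict[str, int]) -> dict[str, float]:
    """Compute each category's selection rate divided by the highest
    category selection rate (an impact ratio of 1.0 is the benchmark)."""
    rates = {cat: selected[cat] / screened[cat] for cat in screened}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Example: applicants screened by an AEDT, broken out by (hypothetical) category.
selected = {"category_a": 120, "category_b": 75, "category_c": 30}
screened = {"category_a": 400, "category_b": 300, "category_c": 150}

for category, ratio in impact_ratios(selected, screened).items():
    print(f"{category}: impact ratio = {ratio:.2f}")
```

As a rule of thumb drawn from the EEOC’s Uniform Guidelines, impact ratios below 0.80 (the “four-fifths rule”) are often treated as a signal of potential adverse impact – the same kind of disparity the disparate impact analyses discussed below are designed to surface.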
Employers in the rest of New York State, stay tuned: State-wide AEDT legislation is currently pending in the New York Assembly, although it has yet to gain traction. In February 2024, Assemblymember Alvarez introduced A9314, which would regulate the use of AEDTs in the screening and hiring process. Under the proposed law, employers would be prohibited from using AEDTs to screen applicants for jobs in the state unless the AEDT has been subject to a disparate impact analysis within the past year and annually thereafter, the results of which must be provided to the New York Department of Labor. An employer would not be required to publicly file the disparate impact analysis results; however, prior to the first use of the tool, it would have to post on its website a summary of the most recent disparate impact analysis and the distribution date of the tool. The bill is currently pending before the Assembly Labor Committee.
New Jersey
Also in February 2024, New Jersey Assemblymembers introduced two bills aimed at regulating the use of AI tools in the hiring process. The first bill, A3854, seeks to regulate the use of AEDTs in hiring to “minimize employment discrimination that may result from the use of the tools.” The bill would prohibit the sale of an AEDT unless (1) the tool was subjected to a bias audit within a year prior to its sale or offer for sale; (2) the sale of the tool includes, at no additional cost, an annual bias audit service that provides the results of the audit to the purchaser; and (3) the sale or offer for sale includes a notice that the tool is subject to the provisions of the proposed law. Additionally, employers using AEDTs would be required to:
- Publish a public summary of the most recent audit; and
- Provide notice within 30 days to each candidate screened by an AEDT that the tool was used to evaluate their candidacy; the qualifications or characteristics used to assess the candidate; and, if requested by the candidate, the source of data collected and the employer’s data retention policy.
The second bill, A3911, would regulate the use of AI-enabled video interviews in the hiring process and would require employers to do the following “prior to making a request for a video interview”:
- Notify the applicant that AI may be used to analyze the applicant’s fitness for the position;
- Provide the applicant with an explanation of how the AI works and the characteristics that will be used to evaluate the applicant; and
- Obtain written consent from the applicant to be evaluated by the AI program.
The proposed law prohibits employers from sharing the video, except with service providers whose expertise is necessary to evaluate the applicant’s fitness for the position, and requires employers to delete videos, including backup copies, within 30 days of an applicant’s request. Employers also would be required to “collect and report” the race and ethnicity of applicants who are not offered in-person interviews and of those who are offered a position or hired.
Additional States with Workplace AI Laws or Legislation
Employers in other locations should be aware of the following AI laws and legislation that may – or soon may – apply to their workplace:
California: In May 2024, the California Civil Rights Department released proposed regulations for the use of AI and automated decision-making systems regarding applicants and employees. The proposed regulations address several issues, including third-party liability, potential discrimination arising from the use of automated systems, and the use of automated systems for background checks and medical or psychological inquiries. Additionally, in February 2024, the California Legislature introduced a bill to prohibit the use of automated decision tools (“ADTs”) to make “consequential decisions” in a manner that results in “algorithmic discrimination.” The proposed law would require employers to conduct annual impact assessments of ADTs, provide advance notice to applicants and employees that an ADT will be used to make a consequential decision, and establish and maintain a governance program to manage the reasonably foreseeable risks of algorithmic discrimination associated with the ADT.
Colorado: Colorado recently became the first state to enact comprehensive legislation regulating the use and development of workplace AI systems. Effective February 1, 2026, developers and deployers of “high-risk” AI systems will be required to use “reasonable care” to protect consumers from known or reasonably foreseeable risks of “algorithmic discrimination.” The law sets out obligations for developers and deployers, compliance with which creates a rebuttable presumption that “reasonable care” was used.
Illinois: In 2020, Illinois enacted a law to regulate the use of AI-enabled video interviews to screen applicants. Among the requirements, employers must provide advance notice of and obtain prior consent for the use of AI to evaluate video interviews.
Maryland: In 2020, Maryland enacted a law that requires employers to obtain prior written consent from applicants for the use of facial recognition technology during interviews.
Bonus for Attorneys: Update from the New Jersey State Bar Association’s AI Task Force
Back in September 2023, the New Jersey State Bar Association launched its Task Force on Artificial Intelligence and the Law (“AI Task Force”) to study the use, ethics, and impact of AI tools on the legal industry. After months of collaboration among several subcommittees, the AI Task Force released its first report in May 2024. The wide-ranging report, anticipated to be the first in a series, provides a high-level view of the potential effects of AI on New Jersey’s legal industry and offers practical guidance on the responsible adoption and use of AI tools.
Highlights of the AI Task Force’s findings and recommendations for legal professionals and law practices include:
- When performing tasks considered “the practice of law,” use only AI tools designed for legal professionals, as opposed to those designed for the public, to avoid potential issues regarding data privacy, ethics, and other concerns.
- When evaluating AI tools and services, identify and document how data is used, transmitted, and stored to ensure confidentiality.
- Adopt an organizational AI policy with a risk assessment framework.
- Stay informed of evolving AI-related technology and understand how to use the technology in light of applicable laws and rules, including the Rules of Professional Conduct.
- When developing or implementing AI systems, collaborate with data privacy experts, cybersecurity professionals, and AI professionals to ensure responsible integration and adherence to ethical and legal standards.
- To enhance data protection, consider shifting the responsibility to protect sensitive information to technology providers.
Additional issues addressed in the report include providing equitable access to AI tools and technology, particularly in rural and underserved areas; monitoring and evaluating AI tools to prevent misuse or unintended consequences, such as those that perpetuate racial bias or other inequities; and establishing CLE requirements for attorneys in technology-related subjects.
We expect to see new and expanded laws and regulations governing workplace AI as the use of these and similar technologies continues to increase. For assistance navigating this complex and evolving area of the law, please reach out to the NFC Attorney with whom you typically work or call us at 973.665.9100.