Earlier this year, the California Civil Rights Council secured final approval for regulations governing the use of artificial intelligence (AI) and automated-decision systems (ADS) in employment. The California Civil Rights Department explains that while AI tools bring myriad benefits, they can also exacerbate existing biases and contribute to discriminatory outcomes. The regulations seek to address these issues by clarifying how the Fair Employment and Housing Act’s (FEHA) anti-discrimination provisions apply to the use of AI and ADS in employment decisions.
Final Regulations: What’s New?
The regulations took effect on October 1, 2025, and apply to all employers covered under FEHA. Following up on the regulations as proposed (see HERE), the final regulations:
- Make clear that ADS use may violate California law if it harms applicants or employees based on protected characteristics, such as gender, race, or disability.
- Ensure employers and covered entities maintain employment records and automated decision data for a minimum of four years.
- Affirm that ADS assessments (including tests, questions, or puzzle games) that elicit information related to a disability may constitute an unlawful medical inquiry.
- Add definitions for key terms, such as “automated-decision system,” “agent,” and “proxy.”
Key Definitions
Employer: “Any person or individual engaged in any business or enterprise regularly employing five or more individuals, including individuals performing any service under any appointment, contract of hire or apprenticeship, express or implied, oral or written.” The regulations clarify that an “agent” is also an employer.
Agent: “[A]ny person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity, which may include applicant recruitment, applicant screening, hiring, promotion, or decisions regarding pay, benefits, or leave, including when such activities and decisions are conducted in whole or in part through the use of an [ADS].”
Proxy: “A characteristic or category closely correlated with a basis protected by [FEHA].”
Automated-Decision System: “A computational process that makes a decision or facilitates human decision making regarding an employment benefit” and “may be derived from and/or use [AI], machine-learning, algorithms, statistics, and/or other data processing techniques.” ADS may be used to perform tasks, including:
- Using computer-based assessments or tests to make predictive assessments; measure skills, dexterity, reaction-time, and/or other abilities or characteristics; measure personality traits, aptitude, attitude, and/or cultural fit; or screen, evaluate, categorize, and/or make recommendations
- Directing job advertisements or other recruiting materials to targeted groups
- Screening resumes for particular terms or patterns
- Analyzing facial expression, word choice, and/or voice in online interviews
- Analyzing employee or applicant data acquired from third parties
Prohibited ADS-Driven Discrimination
While the regulations do not prohibit the use of ADS in the employment process, they prohibit the use of ADS or selection criteria (e.g., qualification standard, employment test, or proxy) in a manner that intentionally or unintentionally discriminates against, or has an adverse impact on, applicants or employees based on categories protected under FEHA. This includes the use of facially neutral ADS selection tools that may have a disparate impact on applicants or employees, whether or not intentional, unless the practice is job-related and consistent with business necessity.
Examples of Selection Criteria That May Result in Disparate Impact
- An ADS that measures an applicant’s skill, dexterity, reaction time, and/or other abilities or characteristics may discriminate against individuals with certain disabilities or other protected characteristics.
- An ADS that analyzes an individual’s tone of voice, facial expressions, or other physical characteristics or behavior may discriminate based on race, national origin, gender, disability, or other protected characteristics.
- An ADS that inquires as to an individual’s availability to work weekends or evenings may discriminate based on religious creed.
Reasonable Accommodations
The regulations make clear that – to avoid unlawful discrimination – employers may need to provide reasonable accommodation for ADS use, particularly for requests related to religious creed or disability. For online employment applications, employers should include a mechanism for the applicant to request an accommodation.
Unlawful Medical Inquiries
The regulations affirm that a “medical or psychological examination” or a disability-related inquiry may include an ADS-facilitated test, question, puzzle, game, or other challenge that is likely to elicit information about a disability.
Third-Party Liability: Includes “Agents”
By including “agent” within the definition of “employer,” the regulations extend liability for ADS-driven discrimination to anyone acting directly or indirectly on behalf of the employer to perform tasks traditionally exercised by the employer or any other FEHA-regulated activity. This may include recruitment, screening, hiring, promotion, or decisions regarding pay, benefits, or leave.
Recordkeeping: Extended to Four Years
Employers must retain for four years all ADS data and selection criteria created or received by the employer (or other covered entity) relating to any employment practice and affecting any employment benefit of any applicant or employee. This includes any data used in or resulting from ADS use, and/or any data used to develop or customize an ADS for use.
Affirmative Defense: Anti-Bias Testing
The regulations provide that employers may defend claims of ADS-driven discrimination by demonstrating evidence “of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.”
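The regulations do not prescribe any particular testing methodology. Purely as an illustration (not legal guidance), one widely used first-pass heuristic in employment analytics is the EEOC “four-fifths rule”: compare selection rates across groups and flag potential adverse impact when a group’s rate falls below 80% of the highest group’s rate. A minimal sketch in Python, with all group labels and counts hypothetical:

```python
# Illustrative adverse-impact ("four-fifths rule") check.
# All group labels and numbers below are hypothetical examples.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate.
    A ratio below 0.8 is a conventional flag for further review."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical results from an ADS-driven resume screen
results = {
    "group_a": (48, 100),  # 48% selected
    "group_b": (30, 100),  # 30% selected
}

ratios = adverse_impact_ratios(results)
flagged = {g for g, r in ratios.items() if r < 0.8}
# group_b's ratio is roughly 0.30 / 0.48 ≈ 0.625, below the 0.8 threshold
```

A flag from a check like this is a signal for deeper statistical and legal review, not a finding of discrimination; documenting the test, its results, and the employer’s response tracks the evidence the regulations describe.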
Employer Takeaways
While the regulations took effect on October 1, 2025, there are several steps you can take to ensure compliance now and mitigate risk going forward, including the following:
- Identify and understand the ADS used in employment decisions (e.g., data collection, use, and analysis)
- Conduct bias audits and impact assessments before deploying ADS
- Implement a protocol for regular bias audits and impact assessments – document tests, results, and responses
- Amend vendor contracts to require anti-bias audits, transparency, and cooperation with audits
- Establish an AI governance procedure to include risk management, bias and fairness, transparency, oversight, and training
- Develop and implement clear procedures to request accommodations for ADS use
- Revise ADS-facilitated assessments to exclude potential unlawful medical inquiries
- Ensure that selection criteria are job-related and consistent with business necessity
- Update data retention policies to ensure ADS data is retained for four years
- Train appropriate personnel on lawful use and legal limitations of ADS use
- Ensure appropriate human oversight of any ADS-assisted decisions
If you have any questions related to the final regulations or need assistance reviewing AI/ADS policies and procedures, please reach out to the NFC Attorney with whom you typically work or call us at 619.292.0515.