ATTENTION NEW JERSEY EMPLOYERS: NJ DIVISION ON CIVIL RIGHTS SERVES UP NEW GUIDANCE ON RISKS OF RELYING ON ALGORITHMS TO MAKE EMPLOYMENT DECISIONS UNDER NJ LAW AGAINST DISCRIMINATION

If you’re still digesting the last helping of federal and state guidance on the use of artificial intelligence in the workplace (see HERE and HERE), it’s time to clear your plate for more. The New Jersey Office of the Attorney General and the Division on Civil Rights recently issued guidance to address the extensive use of automated decision-making tools (ADMTs) and to clarify how the New Jersey Law Against Discrimination (LAD) applies to algorithmic discrimination (AD) resulting from the use of ADMTs.

While the guidance does not impose new requirements or establish additional rights beyond existing law, it makes clear that the LAD applies to AD in the same way it applies to other discriminatory conduct. It further emphasizes that a covered entity that engages in AD may be held liable for violating the LAD even if the entity did not intend to discriminate or the AD arose from the use of a tool the entity did not develop. Appetite whetted but not ready to devour a full serving of the 13-page guidance? Nibble away at these bite-sized highlights of the document.

Automated Decision-Making Tools (ADMTs)

The guidance defines ADMTs as any technological tool, including any software tool, system, or process, used to automate all or part of the human decision-making process. These tools may incorporate technologies such as generative artificial intelligence, machine-learning models, traditional statistical tools, and decision trees, which use algorithms to analyze data and make predictions, offer recommendations, or generate new data. In the employment context, ADMTs are frequently used to screen resumes and assist in decisions regarding hiring, promotions, demotions, and terminations. While ADMTs may streamline processes, the guidance warns that these tools can result in AD by creating classes of individuals who are either advantaged or disadvantaged in ways that exclude or burden them based on protected characteristics.

How ADMTs Can Lead to Discriminatory Outcomes

The guidance explains that AD often stems from biases introduced into ADMTs through their design, training, or deployment. With respect to design, the guidance notes that a developer’s choices when designing an ADMT can skew the tool, whether those choices are purposeful or inadvertent. With respect to training, the guidance warns that ADMTs can become biased if the training data is skewed or unrepresentative, lacks diversity or context, reflects historical bias, or contains errors. As for deployment, the guidance advises that AD can occur if an ADMT is used to purposely discriminate or to make decisions it was not designed to assess. In such cases, deployment may amplify any internal bias or external systemic inequity by feeding biased decisions back into the tool’s continued training.
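To make the training-data point concrete, here is a minimal, illustrative sketch of one way such skew might be surfaced: comparing each group’s share of a training set against a reference population, such as the applicant pool. The group names, counts, and the 5% tolerance below are hypothetical assumptions for illustration only; they do not come from the guidance, and a real bias audit would be considerably more involved.

    # Illustrative-only sketch: compare each group's share of the
    # training data against its share of a reference population.
    # All group names and figures are hypothetical.
    training_counts = {"group_a": 900, "group_b": 100}    # records used to train the ADMT
    reference_share = {"group_a": 0.60, "group_b": 0.40}  # share in the reference population

    total = sum(training_counts.values())
    for group, count in training_counts.items():
        train_share = count / total
        gap = train_share - reference_share[group]
        if gap < -0.05:
            note = "underrepresented"
        elif gap > 0.05:
            note = "overrepresented"
        else:
            note = "roughly proportional"
        print(f"{group}: {train_share:.0%} of training data vs "
              f"{reference_share[group]:.0%} reference ({note})")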

The guidance further explains that AD can occur where the use of an ADMT results in disparate treatment or disparate impact based on a protected characteristic, or where the tool precludes or impedes the provision of reasonable accommodations or of modifications to policies, procedures, or physical structures needed to ensure accessibility. To illustrate these concepts, the guidance provides the explanations and examples summarized below:

Disparate Treatment: Covered entities engage in disparate treatment discrimination when they use ADMTs to treat members of a protected class differently or assess members of a protected class selectively. 

  • For example, disparate treatment discrimination may occur where an ADMT makes recommendations based on a close proxy for a protected characteristic, such as selecting applicants who provide an individual taxpayer identification number instead of a Social Security number. 

Disparate Impact: Covered entities engage in disparate impact discrimination when an ADMT makes recommendations or decisions that disproportionately harm members of a protected class, unless the use of the tool serves a substantial, legitimate, nondiscriminatory interest and there is no less discriminatory alternative.

  • For example, disparate impact discrimination may arise where a company assesses contract bids through an ADMT that disproportionately screens out female-owned businesses, or a store bans former shoplifters by using facial recognition technology that disproportionately generates false positives for patrons who wear religious headwear. 

Reasonable Accommodations: Covered entities may violate the LAD by using ADMTs that are inaccessible to individuals based on a protected characteristic, or by relying on recommendations made by ADMTs that fail to recognize that an accommodation is needed or that penalize individuals who need a reasonable accommodation.

  • For example, a covered entity may violate the LAD by relying on an ADMT’s recommendation to discipline an employee for taking breaks when the employee is allowed additional break time as an accommodation, or by using an ADMT to measure typing speed that does not fairly measure the typing speed of an applicant who uses a non-traditional keyboard because of a disability.

Liability for Algorithmic Discrimination Under the LAD

Although the guidance acknowledges that it is often difficult to understand how an ADMT functions or to detect the factors contributing to discriminatory outcomes, it makes clear that covered entities are liable for any policies or practices that result in discrimination in violation of the LAD. The guidance further warns that entities are not immune from liability for AD resulting from the use of an ADMT simply because the tool was developed by a third party or because the entity lacked knowledge of the tool’s inner workings. Accordingly, the guidance advises entities to carefully evaluate the design and testing of ADMTs before the tools are deployed.

Employer Takeaways

While the guidance does not advise against the use of ADMTs, it makes clear that the tools must be used responsibly to reduce the risk of discrimination. To mitigate the risk of AD when using ADMTs, employers should:

  • Understand the functions of the tools, including the data analyzed and how it is used;
  • Regularly evaluate and test the tools for potential discriminatory results (i.e., conduct disparate impact analyses, as sketched below); and
  • Ensure vendors have designed and tested the tools to avoid potential discriminatory outcomes before the tools are deployed.
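As a concrete starting point for the disparate impact analyses referenced above, here is a minimal sketch built around the EEOC’s “four-fifths” rule of thumb, under which a selection rate for any group below 80% of the highest group’s rate may indicate adverse impact. The four-fifths rule, group labels, and counts are assumptions for illustration; the New Jersey guidance does not prescribe a particular testing methodology.

    # Minimal sketch of a disparate impact check using the "four-fifths
    # rule" of thumb: a group's selection rate below 80% of the highest
    # group's rate may indicate adverse impact and warrant closer review.
    # Group labels, counts, and the threshold are illustrative assumptions.
    FOUR_FIFTHS_THRESHOLD = 0.8

    # Hypothetical outcomes from one ADMT screening round:
    # {group: (number selected, number of applicants)}
    outcomes = {
        "group_a": (48, 100),
        "group_b": (30, 100),
    }

    rates = {g: sel / n for g, (sel, n) in outcomes.items()}  # selection rate per group
    highest_rate = max(rates.values())

    for group, rate in rates.items():
        impact_ratio = rate / highest_rate
        flag = "REVIEW" if impact_ratio < FOUR_FIFTHS_THRESHOLD else "ok"
        print(f"{group}: selection rate {rate:.0%}, "
              f"impact ratio {impact_ratio:.2f} [{flag}]")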

