August 24, 2023

EEOC Settles First Discrimination Suit Based on Use of AI Software

The Equal Employment Opportunity Commission (EEOC) has just settled its first discrimination suit based on the use of artificial intelligence (AI) in the hiring process. The case EEOC v. iTutorGroup offers employers a new cautionary tale on the use of artificial intelligence tools in the workplace.

iTutorGroup allegedly violated federal anti-discrimination laws by programming its online recruitment software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older. All told, iTutorGroup rejected more than 200 qualified U.S. applicants because of their age.

iTutorGroup’s $365,000 settlement highlights the EEOC’s commitment to ensuring that technologies used in hiring and other employment decisions comply with the anti-discrimination laws the agency enforces. The EEOC has produced several informative resources under its recently launched Artificial Intelligence and Algorithmic Fairness Initiative.

AI decision-making tools might intentionally or unintentionally screen out individuals with disabilities or those who are members of a protected group, both during the application process and once employees are on the job. Some of the AI tools most commonly used in the workplace include:

  • Resume scanners that prioritize applications using certain keywords;
  • Employee monitoring software that rates employees on the basis of their keystrokes or other factors;
  • “Virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
  • Video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and
  • Testing software that provides “job fit” scores for applicants/employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test.

AI decision-making tools used in a way that causes a disparate impact on individuals in a particular protected group will violate anti-discrimination laws (e.g., Title VII or the ADEA) unless the employer can show that the use is “job related and consistent with business necessity.” Below are a few key practices for lowering your organization’s risk when using AI tools in the workplace:

  • Provide applicants and employees with information about how the organization’s technology evaluates them.
  • Review all pre-defined qualifications used by AI decision-making tools to determine whether a selection procedure has an adverse impact on a particular protected group; that is, check whether use of the procedure produces a selection rate for individuals in that group that is “substantially” lower than the selection rate for individuals in another group (e.g., women vs. men; older vs. younger workers).
  • Provide instructions on how applicants/employees can request a reasonable accommodation and respond appropriately to all requests for reasonable accommodations related to the organization’s use of AI decision-making tools.