The United States Equal Employment Opportunity Commission (EEOC) and three other federal agencies issued a joint statement vowing to use existing laws to protect employees and the general public from discrimination and bias arising from the use of artificial intelligence (AI). Joining the EEOC in the April 25, 2023, statement were the Consumer Financial Protection Bureau (CFPB), the Department of Justice’s Civil Rights Division, and the Federal Trade Commission (FTC).
The statement refers to the EEOC’s technical assistance document, which explains how the Americans with Disabilities Act (ADA) applies to the use of software, algorithms, and AI to make employment-related decisions about job applicants and employees. The document states that the most common ways that an employer’s use of algorithmic decision-making tools could violate the ADA are:
- The employer does not provide a reasonable accommodation that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm.
- The employer relies on an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability, even though that individual is able to do the job with a reasonable accommodation.
- The employer adopts an algorithmic decision-making tool for use with its job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations.
The EEOC’s document also warns that employers will be responsible for the use of AI even if the AI was developed by an outside vendor. Further, employers may be held responsible for the actions of agents, such as software vendors, if the employer has given the agent authority to act on the employer’s behalf. The guidance suggests that when an agent uses AI on the employer’s behalf, the employer should ask the agent to forward all accommodation requests promptly so the employer can process them in accordance with ADA requirements.
In addition to the joint statement, the Biden Administration announced that Vice President Kamala Harris met with the CEOs of four American tech companies to promote responsible use of AI that protects Americans’ rights and safety.
States and cities have also begun to regulate the use of AI in the workplace. For example, New York City’s law prohibits employers from using an “automated employment decision tool” to screen candidates or employees unless the tool has undergone a bias audit and the employer has made a summary of the audit results publicly available on its website before using the tool. The City will begin enforcing the law on July 5, 2023. Further, the Illinois Artificial Intelligence Video Interview Act regulates the use of AI-based video interview evaluation systems, imposing disclosure and consent requirements for applicants and, for employers that rely solely on AI analysis of video interviews to select candidates for in-person interviews, annual reporting of applicant race and ethnicity to the state. Finally, Maryland restricts employers from using facial recognition services to create a facial template during an applicant’s interview unless the applicant consents in a written waiver.
Ballard Spahr’s Labor and Employment Group is monitoring developments regarding employers’ use of AI and routinely assists employers in developing policies, training, and compliance measures. Ballard Spahr also regularly defends employers facing administrative actions and charges from the EEOC and other federal agencies.