Keeping Up with the EEOC: Artificial Intelligence Guidance and Enforcement Action

May 23, 2022
On May 12, 2022, more than six months after the Equal Employment Opportunity Commission (“EEOC”) announced its Initiative on Artificial Intelligence and Algorithmic Fairness,[1] the agency issued its first guidance regarding employers’ use of Artificial Intelligence (“AI”).[2]
The EEOC’s guidance outlines best practices and key considerations that, in the EEOC’s view, help ensure that employment tools do not disadvantage applicants or employees with disabilities in violation of the Americans with Disabilities Act (“ADA”). Notably, the guidance came just one week after the EEOC filed a complaint under the Age Discrimination in Employment Act (“ADEA”) against a software company, alleging intentional age discrimination through its application software, potentially signaling more AI- and algorithm-based enforcement actions to come.
The EEOC’s AI Guidance
The EEOC’s non-binding, technical guidance provides suggested guardrails for employers on the use of AI technologies in their hiring and workforce management systems.
Broad Scope. The EEOC’s guidance encompasses a broad range of technology that incorporates algorithmic decision-making, including “automatic resume-screening software, hiring software, chatbot software for hiring and workflow, video interviewing software, analytics software, employee monitoring software, and worker management software.”[3] As an example of such software that has been frequently used by employers, the EEOC identifies testing software that provides algorithmically generated personality-based “job fit” or “cultural fit” scores for applicants or employees.
Responsibility for Vendor Technology. Even if an outside vendor designs or administers the AI technology, the EEOC’s guidance suggests that employers will be held responsible under the ADA if the use of the tool results in discrimination against individuals with disabilities. Specifically, the guidance states that “employers may be held responsible for the actions of their agents, which may include entities such as software vendors, if the employer has given them authority to act on the employer’s behalf.”[4] The guidance further states that an employer may also be liable if a vendor administering the tool on the employer’s behalf fails to provide a required accommodation.
Common Ways AI Might Violate the ADA. The EEOC’s guidance outlines the following three ways in which an employer’s tools may, in the EEOC’s view, be found to violate the ADA, although the list is non-exhaustive and intended to be illustrative:
- By relying on the tool, the employer fails to provide a reasonable accommodation. Individuals with disabilities may need “specialized equipment” or “alternative tests or formats” to ensure that they are accurately assessed. For example, the EEOC notes that an applicant with limited manual dexterity may have difficulty taking a knowledge test that requires a manual input device such as a keyboard or trackpad. The EEOC’s guidance states that, absent an undue hardship, the applicant should be provided with an alternative version of the test (e.g., a test allowing oral responses).
- The tool screens out an individual with a disability who is able to perform the essential functions of the job with or without an accommodation. Whether intentional or inadvertent, “screening out” may arise from a variety of factors, such as special circumstances not being taken into account in designing the algorithmic decision-making tool. For example, if a video interviewing tool analyzes speech patterns to assess an applicant’s problem-solving abilities, it may screen out an individual with a speech impediment whose speech deviates from the expected patterns and who therefore receives a low or disqualifying score. The EEOC’s guidance offers a separate example of how a personality test seeking to measure workplace focus may negatively score an individual with Posttraumatic Stress Disorder who is not able to ignore distractions. While such a test may be generally predictive and valid, the guidance states that “it might not accurately predict whether the individual still would experience those same difficulties under modified working conditions such as a quiet workstation or permission to use noise-cancelling headphones.”[5]
- The tool makes a disability-related inquiry or otherwise constitutes a medical examination. An AI tool that asks questions about an individual’s medical conditions or physical restrictions, or overtly asks whether the individual has a disability, may violate the ADA’s prohibition on making disability-related inquiries. Similarly, a tool’s assessment of an employee or applicant may constitute an impermissible medical examination if it “seeks information about an individual’s physical or mental impairments or health.” The EEOC’s guidance attempts to clarify its recommendations with examples, noting that AI screening tools may lawfully pose questions to applicants and employees that “might somehow be related to some kinds of mental health diagnoses,” such as whether the individuals are optimistic about the future. However, if the AI tool’s use of such a question screens out an individual because of a disability (e.g., Major Depressive Disorder), it may nevertheless be found to violate the ADA, since the tool would ultimately disqualify an applicant who may otherwise be able to perform the essential functions of the job with or without an accommodation.[6] While this example is fairly nuanced, it provides insight into how the EEOC may scrutinize the use of AI in the workplace.
Tips for Avoiding Pitfalls. In addition to illustrating the agency’s view of how employers may run afoul of the ADA through their use of AI and algorithmic decision-making technology, the EEOC’s guidance provides several practical tips for how employers may reduce the risk of liability. For example:
- Make the Accommodations Process Transparent. The EEOC recommends that employers state clearly, in writing, that applicants and employees may request reasonable accommodations, and provide instructions on how to do so.
- Give Notice Before Performing AI Assessments. The EEOC suggests that employers provide all applicants and employees undergoing an assessment by an algorithmic decision-making tool with information, “in plain language and in accessible formats,” regarding “the traits that the algorithm is designed to assess, the method by which those traits are assessed, and the variables or factors that may affect the rating.”[7] Illinois already requires employers using AI analysis in video interviewing to notify applicants of how the AI tool works and what characteristics will be used to evaluate them. Likewise, effective January 1, 2023, employers in New York City will be required to provide applicants and employees with notices that explain how the tool works and what job qualifications and characteristics are being considered.[8]
- Focus on Essential Functions. The EEOC recommends ensuring that the AI and algorithmic tools “only measure abilities or qualifications that are truly necessary for the job—even for people who are entitled to an on-the-job reasonable accommodation” and measure those necessary qualifications “directly, rather than by way of characteristics or scores that are correlated with those abilities or qualifications.”[9]
- Confirm Vendor Compliance. For employers purchasing tools from vendors, the EEOC suggests that an employer “confirm that the tool does not ask job applicants or employees questions that are likely to elicit information about a disability or seek information about an individual’s physical or mental impairments or health, unless such inquiries are related to a request for reasonable accommodation.”[10] Employers in New York City should take note that the new NYC law will require an independent bias audit to confirm that the tool does not have an adverse impact on the basis of race, ethnicity, or sex (a simplified sketch of that kind of analysis appears after this list). Recently proposed federal and D.C. laws, if enacted, would require a yearly bias audit covering the full spectrum of protected classes.
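The NYC law does not prescribe a particular audit methodology, but the following sketch illustrates the kind of selection-rate comparison an adverse-impact analysis typically involves, flagging ratios below the four-fifths rule of thumb that the EEOC has long used in its Uniform Guidelines. The threshold, function names, and data structure here are illustrative assumptions, not requirements of any of the laws discussed above.

```python
# Illustrative sketch of an adverse-impact (selection-rate) comparison of the
# kind a bias audit might include. The 0.8 threshold reflects the EEOC's
# traditional four-fifths rule of thumb; the NYC law does not mandate this
# exact test, so treat the structure and threshold as assumptions.
from collections import Counter

def impact_ratios(outcomes: list[tuple[str, bool]], threshold: float = 0.8):
    """outcomes: (group, was_selected) pairs. Returns each group's selection
    rate divided by the most-selected group's rate, plus a flag for ratios
    below the threshold. Assumes at least one applicant was selected."""
    applied = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, sel in outcomes if sel)
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: (rate / top, rate / top < threshold) for g, rate in rates.items()}

# Toy data: 50% vs. 30% selection rates -> ratio 0.6, flagged under 4/5 rule.
data = ([("group_a", True)] * 50 + [("group_a", False)] * 50
        + [("group_b", True)] * 30 + [("group_b", False)] * 70)

for group, (ratio, flagged) in impact_ratios(data).items():
    print(group, round(ratio, 2), "flagged" if flagged else "ok")
```

A real audit would go further (statistical significance testing, intersectional groups, and job-relatedness analysis), but the selection-rate comparison above is the core computation.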
Enforcement Action
As previewed above, on May 5, 2022—just one week before releasing its guidance—the EEOC filed a complaint in the Eastern District of New York alleging that iTutorGroup, Inc., a software company providing online English-language tutoring to adults and children in China, violated the ADEA.[11]
The complaint alleges that a class of applicants was denied employment as tutors because of their age. Specifically, the EEOC asserts that the company’s application software solicited applicant birthdates and automatically rejected female applicants age 55 or older and male applicants age 60 or older, screening out hundreds of older, qualified applicants. The complaint alleges that the charging party was rejected when she applied using her real birthdate, which showed she was over the age of 55, but was offered an interview when she resubmitted an otherwise identical application with a more recent date of birth. The EEOC seeks a range of remedies, including back wages, liquidated damages, a permanent injunction against the challenged hiring practice, and the implementation of policies, practices, and programs providing equal employment opportunities for individuals 40 years of age and older. iTutorGroup has not yet filed a response to the complaint.
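To make the alleged mechanism concrete, the sketch below reproduces the kind of automated rule the complaint describes: rejection based directly on gender-specific age cutoffs. The field names and structure are hypothetical and are not drawn from iTutorGroup’s actual software.

```python
# Hypothetical reconstruction of the screening rule alleged in the EEOC's
# complaint: the application software solicited birthdates and automatically
# rejected female applicants age 55+ and male applicants age 60+. All names
# and structure here are illustrative, not taken from the actual software.
from datetime import date

ALLEGED_CUTOFFS = {"female": 55, "male": 60}  # alleged age thresholds

def age_on(birthdate: date, as_of: date) -> int:
    """Age in whole years as of a given date."""
    had_birthday = (as_of.month, as_of.day) >= (birthdate.month, birthdate.day)
    return as_of.year - birthdate.year - (0 if had_birthday else 1)

def alleged_auto_reject(gender: str, birthdate: date, as_of: date) -> bool:
    """True if an application would be auto-rejected under the alleged rule.
    Age and gender are used directly as rejection criteria, which is why the
    complaint frames this as intentional discrimination."""
    cutoff = ALLEGED_CUTOFFS.get(gender)
    return cutoff is not None and age_on(birthdate, as_of) >= cutoff

# Mirrors the complaint's allegation: the same application is rejected with
# the real birthdate but passes with a more recent (hypothetical) one.
print(alleged_auto_reject("female", date(1965, 1, 1), date(2020, 3, 1)))  # True
print(alleged_auto_reject("female", date(1975, 1, 1), date(2020, 3, 1)))  # False
```

Because the alleged rule keys directly on age and gender, it would amount to facial disparate treatment; no statistical analysis would be needed to demonstrate the disparity.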
Takeaways
Given the EEOC’s enforcement action and recent guidance, employers should evaluate their current and contemplated AI tools for potential risk. In addition to consulting with vendors who design or administer these tools to understand the traits being measured and types of information gathered, employers might also consider reviewing their accommodations processes for both applicants and employees.
___________________________
[1] EEOC, EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness (Oct. 28, 2021), available at https://www.eeoc.gov/newsroom/eeoc-launches-initiative-artificial-intelligence-and-algorithmic-fairness.
[2] EEOC, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022), available at https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence [hereinafter EEOC AI Guidance].
[3] Id.
[4] Id. at 3, 7.
[5] Id. at 11.
[6] Id. at 13.
[7] Id. at 14.
[8] For more information, please see Gibson Dunn’s Client Alert, New York City Enacts Law Restricting Use of Artificial Intelligence in Employment Decisions.
[9] EEOC AI Guidance at 14.
[10] Id.
[11] EEOC v. iTutorGroup, Inc., No. 1:22-cv-02565 (E.D.N.Y. May 5, 2022).
The following Gibson Dunn attorneys assisted in preparing this client update: Harris Mufson, Danielle Moss, Megan Cooney, and Emily Maxim Lamm.
Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. To learn more about these issues, please contact the Gibson Dunn lawyer with whom you usually work, any member of the firm’s Labor and Employment practice group, or the following:
Harris M. Mufson – New York (+1 212-351-3805, hmufson@gibsondunn.com)
Danielle J. Moss – New York (+1 212-351-6338, dmoss@gibsondunn.com)
Megan Cooney – Orange County (+1 949-451-4087, mcooney@gibsondunn.com)
Jason C. Schwartz – Co-Chair, Labor & Employment Group, Washington, D.C.
(+1 202-955-8242, jschwartz@gibsondunn.com)
Katherine V.A. Smith – Co-Chair, Labor & Employment Group, Los Angeles
(+1 213-229-7107, ksmith@gibsondunn.com)
© 2022 Gibson, Dunn & Crutcher LLP
Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.
"and" - Google News
May 24, 2022 at 03:13AM
https://ift.tt/slbmPxn
Keeping Up with the EEOC: Artificial Intelligence Guidance and Enforcement Action - Gibson Dunn
"and" - Google News
https://ift.tt/72M56UH
https://ift.tt/Ug23xiF
And
Bagikan Berita Ini
0 Response to "Keeping Up with the EEOC: Artificial Intelligence Guidance and Enforcement Action - Gibson Dunn"
Post a Comment