Will Artificial Intelligence End Employment Bias – and Discrimination Lawsuits?
6.17.2021
Artificial intelligence is infiltrating nearly every aspect of our lives, from determining which ads we see on our phones to helping health care providers diagnose and triage admitted patients and decide how to allocate limited resources. But can AI also eliminate bias in hiring? If so, would that render all future discrimination claims moot?
New York City may provide some answers to these questions if the City Council approves a local law sponsored by Councilwoman Laurie Cumbo that would regulate automated decision-making to alleviate systemic inequities, promote inclusiveness in hiring and ensure diversity in the city’s workforce. Called the Sale of Automated Employment Decision Tools (SAEDT), the bill is an anti-discrimination algorithmic accountability (AA) measure aimed at fighting all forms of discrimination, whether intentional or not, and at raising anti-bias awareness in private employers’ use of artificial intelligence hiring applications.
Although New York City has some of the nation’s strongest anti-discrimination hiring laws, the data show that more work needs to be done. In 2018, New York City pioneered nationwide efforts to address inadequate regulation and oversight of automated decision-making (ADM) tools by assembling a task force[1] to review their use by the city’s public agencies. The task force issued a comprehensive report[2] concluding that transparency and accountability in AI-powered decision-making tools are key to bringing more fairness to their use and to diminishing the risks of bias and privacy harm. Yet despite all the regulations and measures New York City is taking, there remains an increasingly worrisome correlation between race and the unemployment rate in the city, with joblessness rates persistently higher among residents of color.
A recent study[3] by the Community Service Society of New York (CSS) pointed out that the COVID-19 pandemic hit certain segments of New York City’s diverse population harder than others. In June 2020, CSS data showed that unemployment rates among Asian, Black and Latinx New York City residents were 21.1%, 23.7% and 22.7%, respectively, compared with 13.9% among white New Yorkers. One root cause of such disparities is the persistent systemic discrimination[4] that prevents certain demographics from participating in the labor market. With adequate controls in place, AI-powered hiring tools may improve the odds that marginalized segments of society gain more equitable access to employment opportunities.
Why Regulating Automated Hiring Tools Matters
On January 12, 2021, HireVue, a leading provider of software for vetting job candidates, announced that it no longer uses the visual analysis component of its automated video interviewing tool, which analyzed candidates’ facial expressions and biometric characteristics to help hiring managers assess candidates’ suitability for a role and to aid in the decision-making process. How such visual analysis affected the likelihood that people of different races, genders or national origins would get a job is not entirely clear.[5] More research is warranted to ensure that such tools do not disproportionately harm certain population segments in the job market. HireVue’s decision is a warning to other users of AI tools and a reminder that such applications should be employed only after appropriate bias and risk assessments have been completed and the identified risks have been mitigated to the minimum.
AI-powered ADM hiring tools typically learn how to complete the intended task (e.g., find the best candidate for a role) by being trained on data from past employees who were high performers in similar roles. The AI analyzes those employees’ personal characteristics (e.g., race, marital status or gender) and builds a profile of the ideal candidate for the role. Used without proper risk mitigation controls (e.g., bias risk and impact assessments), this method may deepen the underrepresentation of certain segments of the population in hiring – for example, Latina women in the financial sector or Black men in IT. The underrepresentation arises because the training data do not recognize Black men in IT as high performers, simply because too few Black men occupied those roles in the past. That gap in the data can produce an unfair sample and put Black men at a competitive disadvantage, decreasing the probability that the AI will pick them as preferred candidates compared with white men, who have typically occupied these roles in the past.
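To make that mechanism concrete, the minimal sketch below trains a simple screening model on invented, historically skewed hiring records – a hypothetical scikit-learn logistic regression, not any vendor’s actual tool – and shows that the model’s selection rate for the historically underrepresented group stays lower even though skill is distributed identically across groups.

```python
# Hypothetical illustration: a screening model trained on historically skewed
# hiring data reproduces that skew. Data, features and thresholds are invented
# for this example; no real vendor tool works exactly this way.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected group membership (0 = historically overrepresented group,
# 1 = historically underrepresented group) and a skill score that is
# identically distributed in both groups.
group = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)

# Historical "high performer" labels: because group 1 rarely held these roles,
# the training data contains few positive examples for it.
hired_in_past = (skill > 0.5) & ((group == 0) | (rng.random(n) < 0.1))

# Train on past outcomes, as an ADM hiring tool typically would.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired_in_past)

# Score a fresh applicant pool with the same skill distribution in both groups.
new_group = rng.integers(0, 2, size=n)
new_skill = rng.normal(size=n)
scores = model.predict_proba(np.column_stack([new_skill, new_group]))[:, 1]
selected = scores > 0.5

for g in (0, 1):
    rate = selected[new_group == g].mean()
    print(f"selection rate, group {g}: {rate:.1%}")
# Despite equally skilled applicant pools, group 1's selection rate comes out
# lower, because the model learned the historical pattern rather than merit.
```

The point of the sketch is not the particular model but the data: when past underrepresentation is baked into the training labels, the tool treats group membership itself as a signal unless that risk is assessed and mitigated.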
How SAEDT Will Bring More Fairness to Hiring
If passed into law, SAEDT would take effect on January 1, 2022. It would require businesses to tell job candidates within 30 days if they use ADM systems to filter applicants or to determine any other term, condition or privilege of employment. It would also prohibit the sale of ADM tools within the city limits to employers – the users of ADMs – unless their developers or sellers satisfy three compliance measures: (1) complete a bias audit in the year prior to sale; (2) provide assurance that bias audits will be completed annually at no additional cost to the user of the ADM; and (3) notify the user of the ADM that the tool is subject to the provisions of SAEDT.
Developers and users of ADMs that do not comply with the law would incur penalties. These compliance steps are all part of the algorithmic accountability measures intended to mitigate the risk of prohibited discriminatory practices and to ensure equal employment opportunity without regard to an applicant’s race, gender, national origin or other protected identifier.
Algorithmic Accountability
Simply defined, algorithmic accountability (AA) is a policy measure aimed at holding developers of ADMs accountable for unfair outcomes of the AI-powered tools they build. For the last decade, the new risks posed by rapid innovation in ADM hiring tools have been on the radar of the Equal Employment Opportunity Commission (EEOC), the federal agency charged with enforcing federal laws prohibiting employment discrimination. Back in 2016,[6] at a public EEOC meeting with a panel of experts, Chair Jenny R. Yang warned that “it is critical that these [hiring] tools are designed to promote fairness and opportunity so that reliance on these expanding sources of data does not create new barriers to opportunity.” The EEOC made clear that employers using AI-powered tools may be held accountable for discriminatory outcomes even when those outcomes are unintended. Despite the EEOC’s preemptive efforts to improve the regulatory environment, there is currently no overarching federal law regulating the use of hiring tools in the U.S.
The proposed SAEDT bill is a far-reaching attempt to ensure that ADMs are not discriminatory and a sound example of local government’s proactive efforts to bring more transparency to the use of AI-powered technology. Bias audit reports showing that a system was properly tested and that all identified issues were promptly addressed may become a useful tool for employers defending claims brought by job candidates who were not hired, or by current employees who were not promoted or were denied other employment benefits, and who allege they were disproportionately affected by ADM hiring tools. It remains to be seen how much weight courts will give such audit reports and whether they will be enough to defeat a discriminatory employment practice claim. Mandated bias reports will likely make it more difficult for plaintiffs claiming that AI-powered ADMs are biased against them to prove their case, because employers can argue that the bias assessments confirmed the hiring tools are free of bias. As with the other legal standards in employment discrimination cases that courts have developed over the years (Griggs v. Duke Power Co., 401 U.S. 424 (1971); Dothard v. Rawlinson, 433 U.S. 321 (1977)), new legal standards are indispensable to adapting to the challenges posed by new technology.
AI Application Regulation
ADM developers and/or users are expected to complete at least two compliance steps. The first is to provide disclosure notices to the public explaining why users of ADMs collect and process certain protected identifiers, e.g., actual or perceived race, color, ethnicity, national origin, religion, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income or disability. Such notices may also assure the public that the ADM will not misuse protected identifiers and set out specific measures to mitigate the risk of misuse. The second step is to perform some form of risk assessment (e.g., pre-deployment risk assessments and post-deployment impact evaluations) to identify issues with the accuracy and bias of the ADM. Once identified, these issues must be addressed without delay. The primary goal of the AA measures in SAEDT is to ensure that when a potential employer asks a job applicant to submit a protected identifier (e.g., race), that information is not misused by the ADM in a discriminatory manner or used to the applicant’s disadvantage with respect to any term of employment.
The proposed SAEDT bill does not specify requirements or provide guidelines on what the regulator expects to be included in the mandated bias risk assessment report. Assessments may be an effective tool for achieving acceptable levels of AA, but regulators cannot leave it to the industry to self-regulate and decide, for example, how detailed the assessments should be or what methods may be used to assess the risks of using the tools. It is promising that, under § 20-844 of SAEDT, local agencies, including the New York City Commission on Human Rights, are directed to create guidance and identify best practices for the proposed bias assessments.
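SAEDT does not say what a bias audit must measure, but one familiar benchmark from existing disparate-impact practice is the EEOC’s “four-fifths rule,” which compares each group’s selection rate with the most-favored group’s rate. The sketch below (pandas, with invented column names and toy data) shows how an auditor might compute that ratio; it illustrates one plausible metric, not a requirement of the bill.

```python
# Hypothetical sketch of one metric a bias audit might report: selection rates
# per group and the adverse-impact ratio behind the EEOC's "four-fifths rule".
# Column names and data are invented; SAEDT itself prescribes no such metric.
import pandas as pd

# Each row is one applicant scored by the ADM tool; "selected" marks whether
# the tool advanced the applicant to the next stage.
results = pd.DataFrame({
    "race":     ["White", "White", "Black", "Black", "Latinx", "Latinx", "Asian", "Asian"],
    "selected": [True,    True,    False,   True,    False,    True,     True,    False],
})

# Selection rate for each group.
rates = results.groupby("race")["selected"].mean()

# Adverse-impact ratio: each group's rate divided by the highest group's rate.
# Under the four-fifths rule of thumb, a ratio below 0.8 flags possible
# disparate impact and should trigger further review and mitigation.
impact_ratio = rates / rates.max()

audit_table = pd.DataFrame({"selection_rate": rates, "impact_ratio": impact_ratio})
print(audit_table)
print("Groups flagged for review:", list(impact_ratio[impact_ratio < 0.8].index))
```

A real audit would, of course, run on the tool’s full applicant history and document how any flagged disparity was investigated and mitigated.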
Mandating assessments without an accompanying roadmap for completing them may end up defeating their purpose. In January 2021, Open Loop,[7] an innovative policy research initiative, completed an AI impact assessment policy-prototyping exercise with several current users of AI-powered applications. Open Loop issued a report titled “AI Impact Assessment: A Policy Prototyping Experiment” (OL Report),[8] which offered several recommendations to developers and operators of AI applications on completing ADM risk assessments. The OL Report stressed that regulators should provide specific standards and detailed guidance, preferably with hypotheticals on implementing ADM assessment processes, released at the same time as the requirement to complete assessments. It also highlighted the need to better document the risk mitigation process that follows an assessment, including why specific risk-mitigating measures were taken and how those measures reduced the risk to an acceptable level or eliminated it.
At the core of AA policies is an effort to bring more fairness and transparency to data processing. There are too many examples of people in protected classes being denied opportunities without fair consideration. Why? One reason is that AI-run gatekeepers like ADMs can eliminate the consideration step altogether. When individuals provide their protected class identifying information to data processors, they expect, at a minimum, fair and non-discriminatory treatment when AI-powered tools process that information. Through mandated risk assessments and public disclosures, AA is one way to ensure that protected data is not misused.
Adomas Siudika is of counsel with Boodell & Domanskis and is a privacy consultant with Termageddon in Chicago. He is a licensed attorney in New York and holds the Certified Information Privacy Professional (CIPP/US) designation from the International Association of Privacy Professionals. He served as vice president of the Lithuanian American Bar Association. Siudika received his LL.M. cum laude from Indiana University Robert H. McKinney School of Law.
[1]. https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=3137815&GUID=437A6A6D-62E1-47E2-9C42-461253F9C6D0.
[2]. New York City Automated Decision Systems Task Force Report, Nov. 2019, https://www1.nyc.gov/assets/adstaskforce/downloads/pdf/ADS-Report-11192019.pdf.
[3]. Irene Lew, Race and the Economic Fallout from COVID-19 in New York City, Community Service Society, July 30, 2020, https://www.cssny.org/news/entry/race-and-the-economic-fallout-from-covid-19-in-new-york-city.
[4]. Harry J. Holzer, Why Are Employment Rates So Low Among Black Men?, Brookings Institution, March 1, 2021, https://www.brookings.edu/research/why-are-employment-rates-so-low-among-black-men.
[5]. Will Knight, Job Screening Service Halts Facial Analysis of Applicants, Wired, Jan. 12, 2021, https://www.wired.com/story/job-screening-service-halts-facial-analysis-applicants.
[6]. Use of Big Data Has Implications for Equal Employment Opportunity, Panel Tells EEOC, U.S. Equal Employment Opportunity Commission, Oct. 13, 2016, https://www.eeoc.gov/newsroom/use-big-data-has-implications-equal-employment-opportunity-panel-tells-eeoc.
[8]. Norberto Nuno Gomez de Andrade and Verena Kontschieder, AI Impact Assessment: A Policy Prototyping Experiment, IE Law School, Stanford Law School, Center for Internet & Society, Jan. 1, 2021, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3772500.