Can a Machine Really Discriminate?

The practice of redlining in housing emerged nearly a century ago. Regrettably, various forms of intentional discrimination or disparate treatment persist today. Although government-sanctioned redlining1 no longer exists, the systemic effects of discriminatory policies in housing and finance are still felt in many ways by people of color, as evidenced by disparities in rates of homeownership,2 for example. In the age of artificial intelligence and machine learning, the risk posed by a more subtle form of discrimination known as disparate impact has increased. Although one might assume that reducing reliance on human decision-makers and increasing reliance on artificial intelligence reduces bias, an outcomes-based approach tells us that this is not the full picture. The question we will explore here is—can a machine really discriminate?

We begin with some working definitions. “Artificial intelligence” generally refers to the ability of a computer, or a computer-controlled robot, to perform tasks that require discernment and are usually done by humans. “Machine learning” is a type of artificial intelligence: the ability of a computer to learn and adapt by using algorithms and statistical models to analyze and draw inferences from patterns in data. “Disparate impact” occurs when a facially neutral policy or practice is discriminatory in effect.

Artificial intelligence, including machine learning, is in widespread use today. As artificial intelligence becomes more ubiquitous, we interact with it daily in some form, sometimes without even knowing it. Banks and other financial institutions use artificial intelligence to make credit and lending decisions. The benefits to such institutions include the increased speed of decision-making and resulting efficiencies, which often translate to cost savings and more bottom-line revenue for the institution.

However, the last several years of data have shown that dependence on artificial intelligence has its drawbacks too. Relevant to our inquiry, in housing and finance, the use of artificial intelligence can result in unintended consequences that pose the risk of unlawful discrimination.3 Simply stated, a machine really can discriminate if it is taught to do so. The reason is that humans program computers to make decisions and supply inputs that are not free from bias, whether explicit or implicit. Relying on potentially biased inputs, the “black box” of a machine can manifest those biases in its outputs or decisions. Furthermore, a machine can even learn to discriminate. Through machine learning, the bias of prior decisions is perpetuated because algorithms recognize patterns in historical data and large datasets and seek to replicate those outcomes. This is particularly pernicious in housing and finance, where we know that discriminatory policies were historically adopted to intentionally disadvantage African Americans.4
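To make the mechanism concrete, the minimal Python sketch below uses synthetic data and hypothetical feature names; it is an illustration, not any lender’s actual model. A simple classifier is trained on historical approval decisions that penalized one group, and the model reproduces that penalty for otherwise identical applicants, because it is doing nothing more than finding patterns in the data it was given.

    # Illustrative sketch only: synthetic applicants, hypothetical features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    income = rng.normal(50, 15, n)      # income in thousands of dollars
    group = rng.integers(0, 2, n)       # hypothetical demographic group label

    # Hypothetical historical decisions: approval depended on income, but
    # group 1 was also penalized by past (biased) human decision-makers.
    logit = 0.1 * (income - 50) - 1.5 * group
    approved = rng.random(n) < 1 / (1 + np.exp(-logit))

    # Train only on what the historical data contain; the model encodes the old penalty.
    X = np.column_stack([income, group])
    model = LogisticRegression().fit(X, approved)

    # Predicted approval for identical applicants who differ only by group.
    applicants = [[50, 0], [50, 1]]
    for features, prob in zip(applicants, model.predict_proba(applicants)[:, 1]):
        print(f"income=50k, group={features[1]}: predicted approval {prob:.2f}")

In this toy setting the historical penalty is visible by construction; in real loan files it is buried among many variables and is far harder to see.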

Detecting discrimination becomes especially complicated when a computer program or algorithm considers potentially hundreds or thousands of data points. Depending on how the data interact and are weighted, some inputs may be treated as more important than others.5 And although race may not be explicitly relied upon, other inputs often serve as proxies for race, such as zip code and last name.6 In highly regulated sectors such as housing and finance, the risk of unlawful discrimination is heightened when financial institutions come to rely on computer programs and algorithms to make credit decisions.
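The following sketch, again using synthetic data and a made-up “zip bucket” stand-in for zip code, illustrates the proxy problem: even when the protected attribute is withheld from the model entirely, a correlated input can carry much of the same information, and the disparity reappears in the model’s predictions.

    # Illustrative sketch only: the protected attribute is never given to the model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 10_000

    group = rng.integers(0, 2, n)       # protected attribute (not an input to the model)
    income = rng.normal(50, 15, n)      # income in thousands of dollars

    # Hypothetical proxy: residential segregation means zip code largely tracks group.
    zip_bucket = np.where(rng.random(n) < 0.8, group, rng.integers(0, 2, n))

    # Historical outcomes carry the same hypothetical bias as before.
    logit = 0.1 * (income - 50) - 1.5 * group
    approved = rng.random(n) < 1 / (1 + np.exp(-logit))

    # "Race-blind" model: trained without the group label, but with the proxy.
    X = np.column_stack([income, zip_bucket])
    model = LogisticRegression().fit(X, approved)

    # The disparity reappears in the predictions even though group was never an input.
    for g in (0, 1):
        mask = group == g
        mean_pred = model.predict_proba(X[mask])[:, 1].mean()
        print(f"group {g}: mean predicted approval {mean_pred:.2f}")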

The risk of discrimination also increases when we come to rely on artificial intelligence without sufficient scrutiny, under the mistaken belief that a machine is incapable of harboring biases. The nature of machine learning itself makes it difficult to “pull back the curtain” and can result in a lack of “explainability” (i.e., determining how a model uses inputs to produce outputs). Although human developers initially design the algorithms, the models ultimately relied upon in decision-making are often developed through machine learning. These models may therefore detect correlations in the data that are problematic from a discrimination standpoint and that the human developer may not even see or know exist.7
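One partial remedy for the explainability gap is to measure how heavily a trained model leans on each of its inputs. The sketch below, using synthetic data and hypothetical feature names, applies permutation importance (shuffling one input at a time and observing how much predictive accuracy degrades) to surface a nominally neutral feature that the model has quietly learned to rely on.

    # Illustrative sketch only: synthetic data, hypothetical feature names.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(2)
    n = 5_000
    feature_names = ["income", "debt_ratio", "zip_bucket"]

    X = np.column_stack([
        rng.normal(50, 15, n),    # income (thousands of dollars)
        rng.random(n),            # debt-to-income ratio
        rng.integers(0, 2, n),    # zip bucket, a possible proxy variable
    ])
    # Hypothetical outcomes that secretly depend on the zip bucket as well as income.
    logit = 0.1 * (X[:, 0] - 50) - 1.5 * X[:, 2]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Permutation importance: how much does shuffling each input hurt the model?
    result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
    for name, importance in zip(feature_names, result.importances_mean):
        print(f"{name}: {importance:.3f}")

A high importance score for a facially neutral input such as a zip-code bucket is not proof of discrimination, but it is a signal that the variable deserves scrutiny as a possible proxy.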

One example from the employment realm showcases the limitations of machine learning and, if left unchecked, its risk of unlawful discrimination. One large tech company used artificial intelligence and machine learning to evaluate job applicants. The computer program relied on the company’s hiring patterns from the prior 10 years. Although the algorithm was intended to streamline the hiring process and remove subjective elements from it, the company came to learn that, for software developer and other technical positions, the program was not rating candidates in a gender-neutral way. The model had effectively taught itself to embed an irrational or discriminatory preference for male candidates over female candidates.8

Studies have quantified the impact of unlawful bias in mortgage lending, which increasingly relies on algorithms and artificial intelligence to predict the likelihood of borrower repayment and to set loan terms. A recent study from the University of California, Berkeley found that lenders charge otherwise-equivalent Hispanic and African American borrowers higher mortgage rates, costing these consumers of color $765 million in extra interest per year. While recognizing that “fully automated underwriting may reduce the incidence of discrimination in loan rejections,”9 the study showed that from 2009 to 2015, up to 1.3 million applications by Hispanic and African American borrowers were rejected due to discrimination. A separate analysis by a mortgage company showed that African American borrowers have the highest mortgage denial rate, at 17.4 percent, while non-Hispanic white borrowers have the lowest, at 7.9 percent.10

The risks posed by increased reliance on artificial intelligence, and the related liabilities, have caught the attention of regulators11 and conscientious companies12 alike. Arguably, business interests (i.e., increased lending to a larger customer base with the aim of greater revenue) align with the legal requirement to pursue less discriminatory alternatives and the moral imperative to root out discrimination. As regulated entities move toward newer technologies, the government is forced to answer these important questions and to develop novel applications of older laws to effectuate their anti-discrimination mandates.

Rethinking the current approach is necessary, and some have called for reliance on alternative sources of data, because it is clear that artificial intelligence has much to offer and is here to stay. The first step is to be aware of this potential for bias; the second is to demand more transparency. This moment also presents an opportunity to examine our own implicit biases, as a society and as individuals. Reducing human bias in the design of artificial intelligence will go a long way toward promoting fairer outcomes by the otherwise neutral machines that we have tasked with decision-making in many important sectors. The machines are watching. Will we rise to meet the challenge?

Endnotes

1 “Throughout much of the 20th century, the Federal Government systematically supported discrimination and exclusion in housing and mortgage lending.” Memorandum on Redressing Our Nation’s and the Federal Government’s History of Discriminatory Housing Practices and Policies, The White House (Jan. 26, 2021), https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/26/memorandum-on-redressing-our-nations-and-the-federal-governments-history-of-discriminatory-housing-practices-and-policies/; see Richard Rothstein, The Color of Law: A Forgotten History of How Our Government Segregated America (2017).

2 Homeownership rates for African Americans have been stagnant or declining over the last 20 years. See Reducing the Racial Homeownership Gap: The Data on Black Homeownership, Urb. Inst., https://www.urban.org/policy-centers/housing-finance-policy-center/projects/reducing-racial-homeownership-gap/data-black-homeownership (last visited Aug. 11, 2022); Dedrick Asante-Muhammad et al., 60% Black Homeownership: A Radical Goal for Black Wealth Development, Nat’l Cmty. Reinvestment Coal. (Mar. 2, 2021), https://ncrc.org/60-black-homeownership-a-radical-goal-for-black-wealth-development/.

3 The Equal Credit Opportunity Act prohibits discrimination on the basis of race, color, religion, national origin, sex, marital status, or age in credit transactions. 15 U.S.C. § 1691(a)(1) (2020). The Fair Housing Act prohibits discrimination on the basis of race, color, religion, sex, familial status, national origin, or disability. 42 U.S.C. § 3604 (2012). The Supreme Court recognized disparate impact liability under the Fair Housing Act in Texas Department of Housing & Community Affairs v. Inclusive Communities Project, Inc., 576 U.S. 519 (2015).

4 See sources cited supra note 1.

5 See Andrew Burt, How to Fight Discrimination in AI, Harv. Bus. Rev. (Aug. 28, 2020), https://hbr.org/2020/08/how-to-fight-discrimination-in-ai (“In a society shaped by profound systemic inequities such as that of the United States, disparities can be so deeply embedded that it oftentimes requires painstaking work to fully separate what variables (if any) operate independently from protected attributes.”).

6 See Consumer Fin. Prot. Bureau, Using Publicly Available Information to Proxy for Unidentified Race and Ethnicity: A Methodology and Assessment (Summer 2014), https://files.consumerfinance.gov/f/201409_cfpb_report_proxy-methodology.pdf.

7 See Michael Kearns & Aaron Roth, The Ethical Algorithm: The Science of Socially Aware Algorithm Design (2019).

8 Jeffrey Dastin, Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women, Reuters (Oct. 10, 2018), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

9 Robert Bartlett et al., Consumer-Lending Discrimination in the FinTech Era (Nov. 2019), https://faculty.haas.berkeley.edu/morse/research/papers/discrim.pdf (emphasis added).

10 Tendayi Kapfidze, LendingTree Analysis Reveals Mortgage Denials at Cycle Low, LendingTree (Oct. 7, 2019), https://www.lendingtree.com/mortgage-denials-at-cycle-low/.

11 See Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, Including Machine Learning, 86 Fed. Reg. 16837 (Mar. 31, 2021), https://www.govinfo.gov/content/pkg/FR-2021-03-31/pdf/2021-06607.pdf. At the time of this writing, the comment period was extended, but no final action has been taken. See Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, Including Machine Learning, 86 Fed. Reg. 27960 (May 24, 2021), https://www.govinfo.gov/content/pkg/FR-2021-05-24/pdf/2021-10861.pdf.

12 See Kate Berry, Fintechs Seek CFPB Guidance on Making AI-Based Lending Fair, Am. Banker (June 29, 2021, 4:39 PM), https://www.americanbanker.com/news/fintechs-seek-cfpb-guidance-on-making-ai-based-lending-fair.

About the Author

Hon. Dania Ayoubi serves as an administrative law judge at the Maryland Office of Administrative Hearings (OAH), where she presides over appeals from over 30 state agencies and serves as a certified mediator. Prior to joining the OAH, Judge Ayoubi served in federal government for nearly 10 years. She clerked for former Chief Judge Eric T. Washington at the District of Columbia Court of Appeals and is a graduate of Georgetown University and Georgetown University Law Center. She was recently selected as a recipient of the Montgomery County Commission for Women’s 2022 Women Making History Award. Judge Ayoubi is a member of the National Association of Women Judges and serves on the Advisory Council for Muslim Americans in Public Service. ©2022 Hon. Dania Ayoubi. All rights reserved.

About the FBA

Founded in 1920, the Federal Bar Association is dedicated to the advancement of the science of jurisprudence and to promoting the welfare, interests, education, and professional development of all attorneys involved in federal law. Our more than 16,000 members run the gamut of federal practice: attorneys practicing in small to large legal firms, attorneys in corporations and federal agencies, and members of the judiciary. The FBA is the catalyst for communication between the bar and the bench, as well as the private and public sectors. Visit us at fedbar.org to learn more.