In 2022, the Washington Post highlighted new research showing that redlining, the practice of denying home loans based on the racial makeup of neighborhoods, which was not outlawed in the United States until 1968, has led not only to lasting patterns of residential segregation and wealth disparities but also to disproportionate exposure to harmful pollutants.
And a bevy of recent studies has documented how Black Americans continue to suffer discrimination, both intentional and unintentional, as a result of flawed data practices. Professor Anita Allen describes a “Black Opticon” of “discriminatory oversurveillance, discriminatory exclusion, and discriminatory predation,” while current FTC Commissioner Rebecca Kelly Slaughter documents ongoing harms associated with algorithmic decision-making in employment, credit, health care, and housing.
Black History Month provides a moment to reckon with these historical and persistent injustices, which we collectively term “data-intensive racial injustice” – and, most importantly, an opportunity to reflect on how to end them. A vigorous debate is taking place: is the answer new regulations, greater use of enforcement authorities under existing civil rights laws, new enforcement capabilities, or some combination of all of the above? Do we need new privacy, AI, and other data-focused laws and regulations that specifically address civil rights harms? And should we focus on regulating sensitive data, or instead on regulating sensitive uses? There are diverse and important perspectives to be considered on all of these questions.
The Critical Role of Accountability Frameworks
A major part of the solution to data-intensive racial injustice is for organizations to adopt and implement accountability frameworks through which they actively assess and mitigate risks to individuals, provide transparency on their practices to stakeholders, and monitor and verify their effectiveness on an ongoing basis. Below is an example of an accountability framework developed by CIPL.
It is essential that accountability frameworks explicitly incorporate into their risk assessments screening for fairness and for unjust racial discrimination – intentional or not. For example, the NIST AI Risk Management Framework includes “fair – with harmful bias managed” as a core element of trustworthy AI systems. It describes three specific categories of AI bias: “systemic, computational and statistical, and human-cognitive,” noting that “[e]ach of these can occur in the absence of prejudice, partiality, or discriminatory intent.”
Organizations must equally commit to mitigating potential harms once they are identified in the risk assessment process; to providing individuals with transparency around the decision-making process and the ability to contest and correct decisions; and, ultimately, to providing redress when harms do occur. As described by Commissioner Slaughter:
“Prioritizing transparency and fairness is necessary, but not sufficient; regulation of algorithmic decision-making must also involve real accountability and appropriate remedies. Increased accountability means that companies—the same ones that benefit from the advantages and efficiencies of algorithms—must bear the responsibility of (1) conducting regular audits and impact assessments and (2) facilitating appropriate redress for erroneous or unfair algorithmic decisions.”
Finally, as the debates over appropriate approaches to reducing data-intensive racial injustice continue, it is vital that the voices of communities most affected by such harms be well-represented in the discussions. As staff at the think tank New America wrote in a 2019 report, “to ensure that the debate on privacy centers perspectives from marginalized communities, the tech policy community needs to reflect the country’s diversity. Including more voices in the policy debate leads to better policy solutions.”
CIPL is committed to advancing the quest for solutions to data-intensive racial injustice by participating constructively in ongoing policy debates, sharing our expertise on solutions such as risk-based accountability frameworks, and lifting up diverse perspectives on this challenging but critically important topic.
Notes

1. It is important to note that such harms are not limited to the United States. In the Netherlands, for example, a child care benefits scandal significantly harmed low-income families and those of ethnic minorities for years due to unchecked bias in the algorithm involved.
2. The U.S. National Telecommunications and Information Administration (NTIA) has opened a consultation on this topic that will surely further enrich this debate.
3. More CIPL writing and research on organizational accountability may be found here.
4. CIPL, Recommendations for a Risk-Based Approach to Regulating AI, p. 6.
5. See, for example, GDPR Articles 22 (automated decision-making), 24 (controller responsibilities), and 35 (data protection impact assessments).
6. NIST, AI Risk Management Framework 1.0, p. 18.
7. Rebecca Kelly Slaughter, Algorithms and Economic Justice, p. 51.