In 2020, the New York Times reported the story of Nijeer Parks, a New Jersey man who spent over a week in jail after a 2019 arrest based on a false match from facial recognition technology. Facial recognition technology is demonstrably worse at recognizing Black faces than white ones, with dark-skinned women suffering some of the poorest results. While developers of facial recognition technologies have been working hard to address these disparities, the concerns persist.
In 2022, the Washington Post highlighted new research showing that redlining, a home lending discrimination practice based on race-based population data that was not outlawed in the United States until 1968, has led not only to lasting patterns of residential segregation and wealth disparities but also to disproportionate exposure to harmful pollutants.
And a bevy of recent studies has documented how Black Americans continue to suffer discrimination, intentional and unintentional, as a result of flawed data practices. Professor Anita Allen describes a “Black Opticon” of “discriminatory oversurveillance, discriminatory exclusion, and discriminatory predation,” while current FTC Commissioner Rebecca Kelly Slaughter documents ongoing harms from algorithmic decision-making in employment, credit, health care, and housing.
Black History Month provides a moment to reckon with these historical and persistent injustices, which we collectively term “data-intensive racial injustice”[1] – and, most importantly, an opportunity to reflect on how to end them. A vigorous debate is underway: is the answer new regulations, greater use of enforcement authorities under existing civil rights laws, new enforcement capabilities, or some combination of the above? Do we need new privacy, AI, and other data-focused laws and regulations that specifically address civil rights harms? And should we focus on regulating sensitive data, or instead on regulating sensitive uses? There are diverse and important perspectives to be considered on all of these questions.[2]
The Critical Role of Accountability Frameworks
A major part of the solution to data-intensive racial injustice is for organizations to adopt and implement accountability frameworks through which they actively assess and mitigate risks to individuals, provide transparency on their practices to stakeholders, and monitor and verify for effectiveness on an ongoing basis. Below is an example of an accountability framework developed by CIPL.[3]
[Figure: CIPL accountability framework. Source: CIPL]
CIPL has advocated for the use of accountability frameworks for mitigating the risks of AI, including bias.[4] There are a variety of ways to promote adoption of risk-based accountability frameworks, from voluntary schemes like the new AI Risk Management Framework released by the U.S. National Institute of Standards and Technology (NIST) to more prescriptive approaches. The European Union enshrined organizational accountability into law in the General Data Protection Regulation,[5] and the American Data Privacy and Protection Act (ADPPA) introduced in Congress in 2022 contained an entire section (Title III) dedicated to Organizational Accountability.
It is essential that accountability frameworks explicitly incorporate screening for fairness and for unjust racial discrimination – intentional or not – into their risk assessments. For example, the AI Risk Management Framework includes “fair – with harmful bias managed” as a core element of trustworthy AI systems. It describes three specific categories of AI bias: “systemic, computational and statistical, and human-cognitive,” noting that “[e]ach of these can occur in the absence of prejudice, partiality, or discriminatory intent.”[6]
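To make this concrete, below is a minimal sketch, in Python, of one way such a fairness screen might be run as part of a risk assessment. The sample data, group labels, and the four-fifths threshold are illustrative assumptions made for this example; they are not part of the NIST framework or any CIPL guidance, and a real screen would use metrics and thresholds appropriate to the decision at hand.

```python
# Illustrative sketch only: a simple fairness screen that a risk assessment
# might run over a model's decisions. Group labels, data, and the threshold
# below are assumptions for the example, not prescribed by NIST or CIPL.
from collections import defaultdict

def selection_rates(records):
    """Compute the favorable-outcome rate per demographic group.

    `records` is an iterable of (group, favorable) pairs, where `favorable`
    is True when the automated decision was positive (e.g., loan approved).
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, fav in records:
        totals[group] += 1
        favorable[group] += int(fav)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (a common screening heuristic, often called the
    four-fifths rule). A flag is a prompt for human review, not proof of bias.
    """
    reference = max(rates.values())
    return {g: (r / reference) < threshold for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical decision log: (group, was the outcome favorable?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(sample)
    print(rates)                          # roughly {'A': 0.67, 'B': 0.33}
    print(disparate_impact_flags(rates))  # {'A': False, 'B': True}
```

A screen like this only surfaces statistical disparities; as the NIST framework notes, systemic and human-cognitive biases also need to be assessed through processes that no single metric can capture.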
Organizations must equally commit to mitigating potential harms once identified in the risk assessment process; providing individuals with transparency around the decision-making process and the ability to contest and correct decisions; and ultimately providing redress when harms do occur. As Commissioner Slaughter describes:
“Prioritizing transparency and fairness is necessary, but not sufficient; regulation of algorithmic decision-making must also involve real accountability and appropriate remedies. Increased accountability means that companies—the same ones that benefit from the advantages and efficiencies of algorithms—must bear the responsibility of (1) conducting regular audits and impact assessments and (2) facilitating appropriate redress for erroneous or unfair algorithmic decisions.”[7]
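As a purely hypothetical illustration of what auditability, contestability, and redress might look like in practice, the sketch below shows a minimal record an organization could keep for each automated decision. The field names and methods are assumptions made for this example, not drawn from Commissioner Slaughter's proposal or from any particular law or framework.

```python
# Illustrative sketch only: a minimal per-decision record supporting audit,
# contestation, and redress. All names here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    subject_id: str                  # the individual affected
    outcome: str                     # e.g. "approved" or "denied"
    rationale: str                   # plain-language explanation for transparency
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    contested: bool = False          # has the individual challenged the decision?
    corrected_outcome: Optional[str] = None  # result of a successful appeal
    redress_note: Optional[str] = None       # remedy provided, if any

    def contest(self, corrected_outcome: str, redress_note: str) -> None:
        """Record a successful challenge and the remedy offered."""
        self.contested = True
        self.corrected_outcome = corrected_outcome
        self.redress_note = redress_note
```

A periodic audit or impact assessment could then iterate over stored records to report how many decisions were contested, how many were corrected, and how quickly redress was provided.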
Finally, as the debates over appropriate approaches to reducing data-intensive racial injustice continue, it is vital that the voices of communities most affected by such harms be well-represented in the discussions. As staff at the think tank New America wrote in a 2019 report, “to ensure that the debate on privacy centers perspectives from marginalized communities, the tech policy community needs to reflect the country’s diversity. Including more voices in the policy debate leads to better policy solutions.”
CIPL is committed to advancing the quest for solutions to data-intensive racial injustice by participating constructively in ongoing policy debates, sharing our expertise on solutions such as risk-based accountability frameworks, and lifting up diverse perspectives on this challenging but critically important topic.
Footnotes
[1] It is important to note that such harms are not limited to the United States. In the Netherlands, for example, a child care benefits scandal significantly harmed low-income families and those of ethnic minorities for years due to unchecked bias in an algorithmic system used to flag suspected fraud.
[2] The U.S. National Telecommunications and Information Administration (NTIA) has opened a consultation on this topic that will surely further enrich this debate.
[3] More CIPL writing and research on organizational accountability may be found here.
[4] CIPL, Recommendations for a Risk-Based Approach to Regulating AI, p. 6.
[5] See, for example, GDPR Articles 22 on Automated Decision Making, 24 on Controller Responsibilities, and 35 on Data Protection Impact Assessment.
[6] NIST, AI Risk Management Framework 1.0, p. 18.
[7] Rebecca Kelly Slaughter, Algorithms and Economic Justice, 51.