By Dennis Hirsch
Professor of Law
The Ohio State University Moritz College of Law and Capital University Law School
Any views expressed herein are not necessarily the views of CIPL nor Hunton Andrews Kurth LLP
If you are a sentient being and are at all involved in the field of privacy, you have no doubt heard about “data ethics.” You may also have wondered: “What does this term really mean?” and “How do organizations achieve it on the ground?” I had these questions, and they led me, several years ago, to initiate a research study of what companies do when they pursue data ethics. I was lucky enough to convince Ohio State colleagues from the fields of business, computer science, philosophy and sociology to join me in this exploration. Combined with my own field of law, this multi-disciplinary team was able to examine data ethics from multiple perspectives. We interviewed twenty-three leading practitioners of data ethics and surveyed fifty more. Here is some of what we learned.
First, the field of data ethics in its recent incarnation is a response to the growing use of advanced analytics and artificial intelligence, and the risks that these technologies pose to individuals and the broader society. These risks include not only violations of privacy but also bias, manipulation, opaque “black box” decision-making, entrenchment of inequality, undermining of democracy and other potential harms.
In the past, organizations whose use of data posed risks to others sought to address this by complying with privacy laws and the fair information practices (FIPs). But the companies we talked to explained that, when it came to advanced analytics and AI, this strategy was woefully insufficient. For one thing, advanced analytics and AI pose risks that go well beyond privacy to bias, manipulation, etc. For another, privacy law’s central tools – notice, consent and purpose limitation – don’t protect people who cannot understand what data scientists can infer from their data, and so cannot make meaningful choices about whether to share this data in the first place. Privacy law remains vital, even essential. However, it does not protect people from the harms that advanced analytics and AI can create.
To reduce these risks, the companies said that they had to go beyond privacy law and the FIPs. They had to try to spot bias and fix it; distinguish between persuasion and exploitation, and stop themselves from engaging in the latter; find ways to make their algorithms more explainable and less opaque; and take other such steps that privacy law does not require. As they put it, they had to go beyond the law and into the realm of “ethics.” For them, “data ethics” does not mean aligning their operations with one ethical philosophy or another. It means going beyond legal requirements to reduce risk to individuals and the broader society.
This formulation demystifies data ethics. It takes it out of the realm of philosophy and puts it squarely into that of beyond-compliance business behavior. Thoughtful companies know how to go beyond compliance. They have done it in the environmental and worker safety areas. Now they need to do it for advanced analytics and AI.
This raises the question of why a company would do more than what the law requires it to do. Here, too, the answers are not mysterious and hew closely to the reasons why companies pursue self-regulation generally. Some do it because a founder or CEO instilled core values that the company wants to honor. But most do it because they believe it enhances their competitiveness and bottom line. Companies pursue data ethics to avoid incidents like the Facebook-Cambridge Analytica scandal, preserve their reputations and sustain their trusted relationships. They seek to achieve data ethics because they know that regulation of advanced analytics and AI is coming. The European Commission’s recent proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence, and the FTC’s recent blog post on Aiming for Truth, Fairness and Equity in Your Company’s Use of AI, demonstrate this. Companies want to get ahead of, and perhaps shape, this coming regulation. They also practice data ethics so that they can better recruit and retain the young data scientists who are key to their business success and who want to work for companies whose values they share.
How do companies achieve data ethics? This brief essay cannot describe our findings on this topic. For that, I refer you to our research team’s report: “Business Data Ethics: Emerging Trends in the Governance of Advanced Analytics and AI.” I hope that you find the report to be useful and would welcome your feedback on it. Contact me at [email protected].