By Dr. Alexander Dix
Vice-Chair of the European Academy for Freedom of Information and Data Protection
Former Data Protection and Freedom of Information Commissioner in Brandenburg (1998-2005) and Berlin (2005-2016)
Any views expressed herein are not necessarily the views of CIPL nor Hunton Andrews Kurth LLP
Sir Bruce Slane, New Zealand’s first Privacy Commissioner, highlighted the importance of privacy at the end of his term of office in 2003 by drawing a parallel to the air we breathe: both are invisible, and we only notice them once they are gone. So when Scott McNealy said in 1999, “You have zero privacy anyway, get over it”, he might as well have said, “You don’t have any air to breathe, so just stop breathing.” Twenty years later, Mark Zuckerberg declared that “the future is private” (after having taken the opposite view ever since founding Facebook).
So are we witnessing a positive climate change with regard to privacy and data protection? Doubtless, there are some positive developments. Since the GDPR took effect in 2018, 17 more countries have adopted privacy laws and 163 more privacy tech vendors have started their businesses. However, some signs point in the opposite direction. Our informational and technological environment is changing at what the European Commission once aptly described as “breakneck speed”. Some have described this development as causing a “data tsunami”.
Richard Thomas, the former UK Information Commissioner, made a remarkable intervention during a panel on biometrics at the 2010 International Conference of Data Protection and Privacy Commissioners in Jerusalem. At that time facial recognition technology was starting to become available, but Google had declared publicly that it would refrain from deploying it for ethical reasons. Thomas was skeptical as to how long this position would be upheld, and he urged the Commissioners to raise public awareness by pointing to the risks for women in particular, should facial recognition one day be rolled out for commercial use. Women, who form the majority of data subjects, would mount strong opposition to a technology that facilitates stalking (to name but one negative consequence).
More recently, while Microsoft and IBM have stopped selling facial recognition software to police departments in the US because of privacy concerns and the software’s discriminatory effects, and Amazon has extended its moratorium on police use of facial recognition “until further notice”, two smaller companies are making headlines. Clearview AI is offering the technology to local authorities and law enforcement agencies, and this offer is being taken up by a number of US cities and the US Postal Service. On the other hand, half a dozen US states and several other cities have banned the use of the technology. PimEyes, initially launched by a Polish start-up, offers facial recognition not only to public bodies but also to the general public. PimEyes markets its service with a privacy spin: everyone, it says, should have the right to find out where their photos have been posted online and to receive alerts when they are posted. The service has become wildly popular among stalkers. Its general use would in effect mean the end of anonymity and of free movement in public places. As Stephanie Hare has rightly observed, technical gadgets such as smartphones can be switched off or left at home; faces cannot. Both Clearview AI and PimEyes are now under investigation in Europe over whether they comply with the provisions of the GDPR.
But the wider picture of our global informational ecosystem must include China, where the government has declared that it wants the country to be a world leader in artificial intelligence by 2030. Facial recognition is considered a helpful tool for identifying people in mass gatherings in case of “a major accident”. How broadly a “major accident” can be construed is illustrated by recent events in Hong Kong. The official values and policies of the Chinese government are diametrically opposed to what Western democracies stand for. And yet, even in the United States, some of the Capitol rioters of January 2021 were identified by private “sedition hunters” using PimEyes (which is currently legal in the US).
The question we have to decide is: what kind of society do we want to live in? Technology is man-made; it is not a natural disaster like a tsunami (although some of these “natural” disasters are themselves man-made, a result of climate change). Technology can and should be regulated. Regulation, however, very often comes too late to influence design decisions, business models and infrastructures. Regulation is necessary, but not sufficient.
What is needed in addition is twofold. First, raising the awareness of young people (even at kindergarten level) of the fundamental importance of being let alone if they so wish. Second, a code of ethics for IT engineers, similar to the Hippocratic Oath for doctors, supervised and enforced by professional bodies. Not every technology that is legal is ethically and socially acceptable. After all, the defense of privacy is too important to be left to Data Protection and Privacy Commissioners alone.