Centre for Information Policy Leadership

To Solve Cross-Border Data Flows We Need Pragmatic Solutions to Build Trust

3/22/2023


 
Cross-border data flows are vital to people in their everyday lives and to every aspect of the global economy and society, from commerce to communications, finance, health, research, and critical functions like cybersecurity and fraud prevention. The COVID-19 pandemic elevated data flows’ importance across domains central to our personal lives, including education, health care, and entertainment. 
 
Some studies suggest that data flows yield large economy-wide gains as measured by GDP, productivity, and other metrics. And yet, countries around the world have proposed policies and laws that would limit flows of data and/or require local data storage, degrading, and sometimes altogether blocking, services that depend on data flows to operate. If these policies are so harmful to economic and societal progress, why do countries continue to impose them?
 
Policymakers cite a variety of reasons for data localization, from concerns about foreign governments’ or malicious non-state actors’ ability to access their citizens’ personal data; to economic and commercial motivations; to a broader desire for digital sovereignty. Underlying many of these motivations is a “trust gap”: governments’ lack of trust that in a world of free data flows, their citizens’ privacy will be respected, national security will be protected, and economic gains will be commensurate with the value of the data.
 
There are strong arguments as to why data localization is ineffective for achieving these goals—indeed, CIPL will set out some of them in a series of forthcoming papers. But equally, proponents of free data flows undermine their own case if they do not acknowledge that some circumstances demand pragmatic solutions that diverge from the pure “Free and Open” ideal.
 
There is a pressing need to chart a pragmatic path between data localization and full data flow liberalization, to address specific concerns while enabling continued enjoyment of the broad benefits of data flows. These will be solutions tailored to respond to specific public policy concerns through a combination of new multilateral arrangements, oversight, accountability, data governance, and technologies such as encryption and privacy-enhancing technologies (the focus of another, ongoing CIPL project).
 
With respect to technological solutions, “sovereign cloud” offerings from several cloud service providers and TikTok’s Project Texas and Project Clover are examples in progress that we are tracking closely. At the same time, such solutions are not suited for all cases and must not detract from efforts to advance “Data Free Flow with Trust” (DFFT)—including the multilateral and multistakeholder diplomatic effort under that name introduced by Japan at the 2019 G20 Summit. 
 
We all – governments, global businesses, and experts – must support and advance digital and data diplomacy efforts. Additional notable and promising initiatives include:
 
  • government-to-government multilateral trust-building efforts, such as the Global Cross-Border Privacy Rules Forum and the OECD Declaration on Government Access to Personal Data Held by Private Sector Entities. Bilateral efforts, like the US-EU Trans-Atlantic Data Privacy Framework, will also remain important.
  • evolving existing data transfer mechanisms, such as Binding Corporate Rules (BCRs) and Standard Contractual Clauses, and fostering greater interoperability and mutual recognition between them globally.
  • building bridges between EU GDPR certifications and the Global Cross-Border Privacy Rules.
  • revisiting and streamlining national rules on cross-border data flows to achieve a more sustainable multilateral approach to data transfers and to data more generally – what we might call a “New Deal for Data.”

Responding to specific concerns in a tailored manner today can complement and facilitate progress toward free flow over the longer term. It is the erosion of trust, above all, that imperils access to the data flow-based services upon which we all depend. To address this trust deficit, we must renew our focus on free data flows with trust, with creativity, pragmatism, and the continued, open-minded dialogue of all stakeholders. CIPL looks forward to being on the frontiers of the new data and digital diplomacy.

Age Assurance and Age Verification Tools: Takeaways from CIPL Roundtable

3/16/2023


 
On February 16, 2023, CIPL hosted a virtual roundtable with representatives from CIPL member companies, data protection authorities, civil society and experts to discuss the role of age assurance tools, their effectiveness, appropriateness, and their role in providing a safe online environment for minors.

The event was part of CIPL’s Children’s Data Privacy Project, launched in 2022, and was the first in a series of “deep dive” roundtables to be held in 2023. Each roundtable will explore existing best practices and emerging options that address the key compliance issues and challenges identified in CIPL’s policy paper “Protecting Children's Data Privacy: International Issues and Compliance Challenges,” published in October 2022. Future roundtables will address the risk-based approach to the protection of children online, transparency, consent and other legal grounds for processing, as well as personalization.

The purpose of the age assurance roundtable was to gather participants’ perspectives on methodologies and emerging best practices for confirming whether a user is a minor, so that minors are appropriately shielded from harmful or inappropriate content and can thrive in a digital ecosystem with age-appropriate content.

Legal Background

Global policy, legislative, and regulatory initiatives to protect children online increasingly require or expect providers of digital services to verify, or at least assess, the age of their users.

Data protection legislation often imposes strict requirements regarding the processing of children’s data (e.g., US COPPA, EU or UK GDPR). Related regulatory frameworks may sometimes specify additional safeguards (e.g., UK Age Appropriate Design Code, Irish Fundamentals, California Age-Appropriate Design Code Act).

Newer regulatory initiatives are requiring digital services to adopt measures that protect children from content, services, and products inappropriate for their age, and ensure access to safe and appropriate online experiences (e.g., EU Digital Services Act, EU Audiovisual Media Services Directive, EU CSAM Regulation proposal, UK Online Safety Bill, EU strategy for a better internet for kids (BIK+)).

Key Takeaways from the Roundtable Discussions

1. The methodology of age assurance and the timing of deployment depend on the nature of the service and its level and likelihood of risk to children.

Different types of online services and platforms present different levels of risk (and benefit) to children and youths. Platforms range from adult-only to purposefully child-centric. The actual services on a platform vary considerably: some may allow public or private interactions and messaging, while others restrict or layer access to various functionalities. The features and design choices specific to a particular service, and the level of potential risk to children, including the likelihood and severity of such risks, will determine whether age assurance is required, which methodology is most appropriate, and how and when it should be deployed.

Most importantly, the chosen methodology should take the best interests of the child into consideration. Age assurance and privacy compliance must lead to the protection of children and minors in the digital environment, not from it.
 
2. There is no silver bullet. No single methodology is better than the others, but one may be more appropriate and effective for a specific use case.

Several age assurance methodologies are currently available to organizations (e.g., self-declaration models, AI-powered age-estimation approaches, biometrics-based tools, third-party provider services), and many others are in development (e.g., through standards). Each methodology presents a different level of accuracy, and each has unique strengths and weaknesses. Some are more privacy protective, while others require collecting more information for the specific age verification or assurance purpose. There is no one-size-fits-all.

The utility and suitability of different age verification or assurance methodologies depend on the risk context of the underlying service(s), and on how, and on what type of device, the service is likely to be accessed. Also, services providing layered functionalities might require layered age assurance (i.e., age assurance requested at different access points) and/or the use of multiple methodologies at different stages.

Choosing a specific methodology requires assessing the risks and benefits of different methods and their proportionality: is the impact of using a given methodology proportionate to the level of harm being addressed or avoided by its use? Choosing an age verification tool where age estimation would suffice might require disproportionate collection of personal data. Identifying the most appropriate method means balancing its effectiveness with privacy protections.

Regulatory expectations must also take into account the practical technical feasibility of different methods and their impact on user experience (e.g., how seamlessly a method can be integrated into the user journey, and whether it would require more than one device).
 
3. Organizations need guidance on adequate age assurance criteria and risk taxonomy to perform proper risk assessments.

Existing regulatory guidelines, such as the ICO Age Appropriate Design Code, the Irish Fundamentals for a Child-Oriented Approach to Data Processing, the California Age-Appropriate Design Code Act, and the CNIL’s 8 recommendations to enhance the protection of children online, are helpful to organizations, as is regulators’ engagement and readiness to provide further tools and guidance.

However, different national norms and cultural contexts create diverging and occasionally conflicting requirements, exacerbating compliance challenges for organizations operating globally. For example, the use of biometrics for age assurance may create risks of non-compliance with national or state laws in some jurisdictions but may be embraced in others. Equally, carrying out appropriate risk assessments is still a challenging endeavor for companies, with many actively testing and developing best practices and methodologies for age verification and assurance. At the same time, regulators are keen to understand and see the industry’s response and progress.

Research has been conducted into creating a spectrum of activities and typical environments that may create harms for children. However, there is no complete convergence or consensus on the granularity of “risky” services, on specific use cases and taxonomies of risks or harms, or on how to conduct acceptable risk assessments. Appropriate risk assessments must focus on the risk to the child or minor, must go beyond data protection, and must take the best interests of the child into consideration, including empowering children in their online experiences.

Regulators equally stress the need for organizations to conduct and document full and comprehensive Data Protection Impact Assessments (DPIAs) when processing children’s personal data and to be able to share such assessments with regulators on request. Such DPIAs must assess the risks of harm to children and teens, as opposed to only risks and harms to organizations. Finally, DPIAs must document how organizations balanced different rights and risks, including any trade-offs.

There are still unanswered questions, such as how to operationalize those concepts through a repeatable and systematic process, or how to account for the changing developmental stages of children and teens when designing products and services and developing risk frameworks.

There is a real need for regulatory convergence and coordination. Initiatives such as the UK Digital Regulation Cooperation Forum, the collaboration between the UK ICO and Ofcom on children’s safety and data protection, and the Global Online Safety Regulators Network were mentioned as good examples of fora that enable coordinated and cross-regulatory discussions.

4. To be effective, the design and deployment of age assurance tools should be informed by continuous research into children’s behavior and motivations.

Research shows that children may lie about their age to access online services. This behavior can be attributed to a number of factors, e.g., confusing information from service providers regarding access, confusing guidance from parents, simplistic age declaration queries, or simple curiosity and the determination to experiment and take part in the online world. Even though children and teens have a good understanding of online harms, they tend to be highly motivated not to be excluded, restricted, or relegated to a lesser version of the service they are seeking to access, and they want to be more in charge of their choices.

Understanding children’s motivations and drawing on parallel age assurance scenarios in the analogue world can support better design, transparency, and trust, and ultimately lead to a more successful and appropriate approach to age assurance.
 
5. Age assurance is only one of the tools available to keep children safe online; it cannot be used in isolation.

Organizations cannot rely on age assurance alone. Keeping children safe online and protecting their data protection and other rights will require a combination of measures to ensure compliance with various data protection and other legal requirements and regulatory guidance, such as privacy and safety by design and default, appropriate user-centric transparency, content moderation and personalization of content, parental consent for certain ages and family specific controls, and age-appropriate services (or child friendly spaces within services).

The more organizations comply with the granular requirements and standards of the UK Age Appropriate Design Code, the Irish Fundamentals, the California Code, or France’s CNIL guidance, the less often they will need to resort to age verification and assurance as the ultimate protective measure from a data protection law point of view.
 
6. Constructive engagement and information sharing are essential for the development of bottom-up standards and certifications for age verification and assurance.

Many organizations, especially larger ones, have made serious investments and commitments to develop best practices for age verification and assurance. They are testing available age assurance methods and exploring new solutions, for instance through participatory design testing. As research, and with it our understanding, evolves, it is imperative that all stakeholders (industry, policymakers, regulators, and civil society) continue to engage in ongoing dialogue and share information regarding expectations and progress.

There will likely be a need for further development of standards and certifications that are accepted across multiple jurisdictions and by multiple organizations. These can only be developed collaboratively, with the participation of all stakeholders. This will be necessary to ensure a more ready and systematic adoption of appropriate tools and techniques, and ultimately greater protection for children and youths online.


Copyright © 2022 by the Centre for Information Policy Leadership at Hunton Andrews Kurth LLP.