The European Commission’s proposed Digital Fairness Act (DFA) is intended to address legitimate concerns about certain online practices, such as addictive design and the use of dark patterns in digital environments. Particular emphasis is placed on strengthening protections for “vulnerable groups” in the context of digital advertising and profiling more generally. At its core lies the ambition to ensure that children and other individuals deemed more susceptible to certain commercial practices benefit from heightened safeguards in an increasingly complex online environment. That objective is framed as a forward-looking response to risks in digital markets that the Commission believes warrant regulatory action.[1]
Yet the emerging contours of the proposal suggest a critical conceptual misstep: the assumption that stronger protection for “vulnerable groups” will follow naturally from introducing a new, normatively open-ended concept of “vulnerability.” It is not evident, however, that definitional expansion alone will produce more effective safeguards, particularly given that the existing body of law already offers protection based on individuals’ vulnerability and the sensitivity of their information. Instead, a more durable and operationally sound approach lies in defining, with greater precision, the unfair or manipulative practices that cause harm, and in ensuring their coherent enforcement across the existing EU framework.
The GDPR already provides a comprehensive framework for protecting Europeans’ personal data. Introducing parallel requirements risks fragmenting these protections and creating tensions that may ultimately undermine user privacy.
More fragmentation at a moment that calls for simplification
A central policy objective of the proposed DFA is to address a perceived “lack of digital fairness for consumers, including vulnerable consumers such as minors,” particularly where such vulnerabilities lead to sub-optimal choices, financial detriment, adverse health effects, or other indirect harms.[2]
In reality, however, the EU already has a mature framework based on (i) legally relevant characteristics of vulnerability or sensitivity and (ii) restrictions on harmful commercial practices based on these existing categories.[3]
The GDPR builds heightened protection for children and for particularly sensitive data directly into its architecture. GDPR Recital 38 makes clear that children merit “specific protection” because they may be less aware of the risks and consequences, and Article 8 establishes more stringent consent requirements for information society services offered directly to them. Beyond that, the principles of fairness and transparency set out in Article 5, the call for child-friendly information in Recital 58, and the right to erasure highlighted in Recital 65 as particularly relevant to minors collectively reflect child-centric safeguards embedded throughout the regime.

Moreover, Article 9 GDPR imposes a general prohibition on processing “special categories of personal data” because of their intrinsic sensitivity[4], including, specifically, data revealing health or sexual orientation, subject only to narrowly defined exceptions. The Court of Justice of the European Union (CJEU) has strengthened these protections by progressively expanding the scope of “special categories” of data to include not only explicitly declared attributes but also data from which sensitive characteristics can be inferred with a sufficient degree of likelihood.[5]
At the same time, the Digital Services Act (DSA) safeguards such data in the commercial context: Articles 26(3) and 28(2) expressly prohibit its use for targeted advertising, irrespective of any purported legal basis, complemented by the recommender-system transparency obligations of Article 27.
Any legislative initiative that seeks to protect consumers on the basis of categories of vulnerability, such as age or health status, risks colliding with robust safeguards already enshrined in the GDPR and the DSA, creating regulatory overlap, fragmentation and legal uncertainty that run counter to the Commission’s own simplification agenda.
A Question of Fairness
Beyond the overlap with existing safeguards for vulnerable groups and special categories of personal data under the GDPR and the DSA, a legislative proposal built around broad notions of consumer “vulnerability” also risks encroaching on other fundamental GDPR principles, including data minimisation and fairness, potentially undermining the very framework designed to protect individuals’ rights. The fairness principle in particular ensures that the individual is not unjustifiably harmed or disadvantaged by the processing of their personal data.
The concept of vulnerability is inherently dynamic and highly subjective, and it will almost inevitably fluctuate over time. Advertisements for certain products or services may be problematic only for individuals battling addiction; health conditions may improve or deteriorate, and financial or social circumstances may change over time.
In practice, such a dynamic and inherently vague concept of vulnerability would be difficult to operationalise. In an attempt to avoid exposing individuals to “vulnerability content”, organisations might be incentivised to collect, infer, or otherwise process additional data points of a potentially sensitive nature on an ongoing basis. This may include not just age or health status but also socio-economic indicators, behavioural patterns, or psychological traits, to determine who qualifies as “vulnerable,” in which context, and at what moment in time.
Demonstrating that information potentially fitting into any of the dynamic and subjective “vulnerable group” categories has not been inadvertently deployed for commercial purposes in the past would impose a further, and likely unworkable, evidentiary burden. It may also disincentivise embedding privacy-enhancing technologies (PETs) into processing activities: PETs would ordinarily serve as tools to operationalise GDPR principles such as purpose limitation and data minimisation, yet they would also limit the ability to check data against “vulnerable group” qualities.[6] Individuals could thus be subjected to more invasive profiling and to sensitive inferences they neither consented to nor could reasonably expect, inadvertently creating unfair data practices rather than improving digital fairness.
Define and prohibit unfair practices
Protecting vulnerable groups from unfair digital practices is essential to sustaining trust in the digital ecosystem. Achieving this requires coherent legal frameworks with clear obligations and enforceable rights that build on, rather than duplicate, existing protections. European consumers are already “among the most protected in the world”[7], and any new regulatory initiative should be grounded in a robust impact assessment that fully accounts for the breadth of the existing framework. Rather than introducing new unbounded concepts, the Commission should focus squarely on unfair commercial conduct itself, ensuring that enforcement targets demonstrable harm.
Where necessary, a targeted, conduct-based approach would strengthen consumer protection while preserving legal certainty, coherence, and the integrity of the EU’s existing regulatory framework.
In the context of the DFA, CIPL recommends:
- Clarify “unfair practices” in digital contexts, e.g., manipulative interfaces (dark patterns), design features that materially impair autonomous decision-making, coercive consent flows, exploitative engagement loops, or personalisation practices that are deceptive or that intentionally bypass user intent.
- Leverage the existing protections for groups already protected by law (children, users linked to sensitive characteristics, users subject to heightened risks from profiling), without creating a parallel and legally elusive “vulnerability” classification layer.
[1] The European Commission’s Digital Fairness Fitness Check posits that “37% of consumers had the impression [emphasis added] that the company had knowledge about their vulnerabilities and used it for commercial purposes”. European Commission, document accompanying the Call for Evidence on the Digital Fairness Act, available at https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14622-Digital-Fairness-Act_en.
[2] European Commission, document accompanying the Call for Evidence on the Digital Fairness Act, available at https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14622-Digital-Fairness-Act_en.
[3] Centre for Information Policy Leadership, Submission to the Call for Evidence on the Digital Fairness Act, available at https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14622-Digital-Fairness-Act/F33096920_en.
[4] Recital 51 GDPR.
[5] Case C-184/20, OT v Vyriausioji tarnybinės etikos komisija, available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:62020CJ0184.
[6] See CIPL, Understanding the Role of PETs in the Digital Age, available at https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl-understanding-pets-and-ppts-dec2023.pdf.
[7] European Commission, Call for Evidence on the Digital Fairness Act.