Consider what happens when a single AI deployment triggers obligations under three separate legal frameworks simultaneously. An emotion recognition system used in a workplace generates requirements under the GDPR for the processing of biometric data, obligations under the EU AI Act for emotion inference in professional settings, and potentially obligations under the Digital Services Act for platforms operating at scale. Three frameworks. One system. Compounding obligations, each requiring separate analysis.
Analysis from the Future of Privacy Forum sets out how this interplay works in practice, and what it demands of organisations deploying AI that touches personal, behavioural, or emotional data.
Three Frameworks, Overlapping Scope
The GDPR governs how personal data is collected, processed, stored, and used across the EU. It applies to emotional data, biometric data, and inferred personal characteristics as categories requiring heightened protection or explicit consent. The GDPR does not specifically address AI: it was drafted before modern AI deployment was anticipated.
The EU AI Act layers AI-specific requirements and prohibitions on top of GDPR's existing framework. Where the GDPR asks "is this data being processed lawfully?", the AI Act asks "is this AI system deploying a prohibited practice?" Both questions may apply to the same system simultaneously. Compliance with one does not guarantee compliance with the other.
The Digital Services Act introduces obligations for very large online platforms regarding algorithmic transparency, advertising, and the protection of minors. For platforms deploying recommendation algorithms or emotional profiling at scale, the DSA adds a third compliance layer.
The Compounding Effect
The FPF analysis identifies a specific tension: the AI Act's prohibitions apply to certain uses of AI, while the GDPR applies to certain data processing activities. A system could theoretically comply with the AI Act's prohibitions while still violating the GDPR, for example by processing emotional data without adequate consent or transparency. The reverse is also possible.
For organisations that have accumulated multiple AI-enabled HR, marketing, or customer-experience tools over several years without a unified framework, the combined obligations of all three instruments create significant compliance complexity. The question of which framework governs a given practice cannot always be answered by reading any single instrument. It requires reading all three in parallel.
Where the Red Lines Fall in Practice
The FPF paper's central observation is that neither regulation is a substitute for the other. An organisation must assess each framework independently and address both simultaneously. What looks like a solved compliance problem under one instrument may remain an open question under another. For organisations relying on a single legal review to cover their AI deployments, this represents a structural gap in their approach.
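The parallel-assessment point above can be made concrete with a toy sketch. This is not legal advice and not from the FPF paper; the data model, status labels, and `Deployment` class are all hypothetical, chosen only to illustrate that a "compliant" verdict under one framework leaves the others as open questions.

```python
# Toy compliance tracker: each framework is assessed independently,
# and nothing is inferred across frameworks. All names are illustrative.
from dataclasses import dataclass, field

FRAMEWORKS = ("GDPR", "AI Act", "DSA")

@dataclass
class Deployment:
    name: str
    # framework -> "compliant" | "open" | "violation"
    assessments: dict = field(default_factory=dict)

    def record(self, framework: str, status: str) -> None:
        self.assessments[framework] = status

    def open_questions(self) -> list:
        # Compliance under one instrument says nothing about the others:
        # any framework not explicitly assessed as compliant stays open.
        return [f for f in FRAMEWORKS
                if self.assessments.get(f, "open") != "compliant"]

d = Deployment("workplace emotion recognition")
d.record("AI Act", "compliant")   # e.g. no prohibited practice found
print(d.open_questions())         # GDPR and DSA remain open questions
```

The design choice mirrors the paper's observation: the default status is "open", so a single legal review clearing one instrument never silently clears the other two.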
HumanSafe Opinion
The following reflects HumanSafe Intelligence's position on this development.
Three overlapping frameworks converging on the same problem is a signal worth reading carefully. The GDPR was drafted to constrain data processing. The AI Act was drafted to constrain AI deployments. The DSA was drafted to constrain platform behaviour. Each addresses the issue from a different angle, but the underlying architecture all three are trying to constrain is the same: systems that derive value from processing human emotional and behavioural signals without constitutional consent.
The compliance complexity is not a property of regulation. It is a property of inference-based systems operating in a rights-sensitive domain. When the architecture is constitutionally constrained from the outset, when the processing that triggers all three frameworks simply does not occur, compliance ceases to be a parallel workstream and becomes a design property. That is what rights-first architecture looks like from the inside.
Sources
- Red Lines under the EU AI Act: Understanding 'Prohibited AI Practices' and their Interplay with the GDPR, DSA — Future of Privacy Forum, 2025
- EU AI Act | Shaping Europe's digital future — European Commission
- General Data Protection Regulation – Legal Text — GDPR.eu