Since 2 February 2025, it has been illegal under European law for an AI system to infer the emotional state of a worker. Article 5(1)(f) of the EU AI Act prohibits, with limited exceptions, the use of AI to infer emotions "in the areas of workplace and education institutions." The legal reasoning is not complicated. Inferring emotional states from biometric signals, without the worker's meaningful consent, without transparency about what is being inferred or why, and without any right of challenge or correction, violates the foundational principle that individuals have rights over how they are evaluated and defined.
It is a coherent position. But it raises a question that the regulation declines to ask.
What happens when the inference is performed not by a machine, but by a manager?
The answer, in most organisations, is: nothing. The same inferential act, reaching a conclusion about a worker's engagement, loyalty, motivation, or emotional state based on signals the worker has not deliberately transmitted, carries no regulatory constraint when performed by a human. There is no transparency requirement. There is no right of challenge. The inference simply becomes the record.
The scale of this practice is not trivial. A peer-reviewed 2025 study of 515 front-line managers found that they assign an average weight of 75 per cent to subjective information when evaluating employees. A substantial body of research has documented what investigators term the "idiosyncratic rater effect": between 53 and 62 per cent of the variance in an individual's performance rating reflects the characteristics of the person doing the rating, not the person being rated. Over half of what a manager concludes about an employee, in other words, reflects the manager.
These are not incidental findings. They describe the normal operation of human inference in organisations. The manager observing body language in a meeting, reading disengagement into silence, inferring lack of commitment from a particular tone, is performing an act that European law has now deemed unacceptable when performed by software. The act is structurally identical. The accountability is entirely different.
The regulatory logic for prohibiting AI emotion inference rests on three properties. Such inference is not declared by the subject. It is not transparent to the subject. And it cannot be challenged by the subject. On each of these dimensions, human managerial inference performs at least as poorly as the AI systems now prohibited.
A manager's inference about an employee's emotional state or commitment is not declared to the subject. The employee typically does not know what conclusions have been reached, or on what basis. There is no audit trail. There is no mechanism by which the employee can examine the evidence used against them. And because transparency is absent, challenge is structurally impossible.
There is a further dimension. Cognitive science has established systematic biases in human judgment that compound this problem. Confirmation bias, affinity bias, and the well-documented tendency for personality-based feedback to fall disproportionately on women (22 per cent more than on men, according to 2022 research on performance management) mean that human inference is not simply unaccountable. It is systematically distorted in ways that produce discriminatory outcomes, even where the individual manager has no discriminatory intention.
Organisations have responded by developing competency frameworks, structured appraisal processes, and calibration exercises. These are useful. They are also, fundamentally, forms of administrative discretion. And administrative discretion, as the world's first legally binding AI treaty, approved by the European Parliament on 11 March 2026, has in effect declared, is insufficient as a governance standard for systems that affect fundamental rights.
The regulatory conversation about AI inference has correctly identified the mechanism of harm: an inference made without the subject's knowledge, consent, or right of challenge, used to make consequential decisions about that person's working life. What it has not yet confronted is that this mechanism does not require a machine.
The question worth sitting with is not whether AI should be prohibited from inferring the emotional states of workers. That question has been answered. The question is on what principled basis we treat the same act as acceptable when the inferrer is human, holds power over the inferred, and operates with no transparency, no audit, and no accountability to the person whose working life depends on the conclusion they reach.
Sources
- Article 5: Prohibited AI Practices — EU AI Act, prohibition effective 2 February 2025
- What determines managers' use of subjective performance information? — Taylor & Francis Online, 2025
- Understanding the Latent Structure of Job Performance Ratings — Scullen, Mount & Goff, Journal of Applied Psychology, 2000 (idiosyncratic rater effect: 53–62% of rating variance reflects the rater, not the ratee)
- Job Performance Feedback Is Heavily Biased — Textio, 2022 (women receive 22% more personality-based feedback than men; analysis of 25,000 performance documents across 250+ organisations)
- The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS 225) — Council of Europe
- Texts adopted: Council of Europe Framework Convention on AI and Human Rights — European Parliament, 11 March 2026