PROVISION 1 · Art. 5(1)(a): Subliminal, manipulative, or deceptive techniques
Does the system use subliminal techniques (beyond conscious awareness) or purposefully manipulative or deceptive techniques?
Examples: hidden audio or visual cues, dark patterns, AI personas designed to deceive users into specific behaviour.
Does it materially distort the behaviour or decisions of affected persons?
Could it cause, or be reasonably likely to cause, significant harm (physical, psychological, or financial)?
All three conditions must be met for the prohibition to apply; the sketch below encodes this cumulative test.
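A minimal Python sketch of that cumulative screening logic, assuming the screening questions are answered as simple yes/no values; the class and function names are hypothetical illustrations, not anything prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class Art5_1a_Answers:
    """Yes/no answers to the three screening questions above (hypothetical names)."""
    uses_manipulative_technique: bool    # subliminal, manipulative, or deceptive
    materially_distorts_behaviour: bool  # distorts behaviour or decisions
    risks_significant_harm: bool         # physical, psychological, or financial

def is_prohibited_art5_1a(a: Art5_1a_Answers) -> bool:
    # The prohibition applies only when ALL elements are present (cumulative test).
    return (
        a.uses_manipulative_technique
        and a.materially_distorts_behaviour
        and a.risks_significant_harm
    )

# Example: a dark-pattern UI that distorts choices but is unlikely to cause
# significant harm does not meet the cumulative test.
print(is_prohibited_art5_1a(Art5_1a_Answers(True, True, False)))  # False
```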
PROVISION 2 · Art. 5(1)(b): Exploitation of vulnerabilities
Does the system target or use information about vulnerabilities related to age, disability, or socio-economic situation?
Examples: ads targeting children, systems that exploit a known disability, tools that single out low-income users for predatory treatment.
Does it exploit those vulnerabilities to materially distort behaviour?
Could it cause, or be reasonably likely to cause, significant harm?
As in Provision 1, the conditions are cumulative: the same screening logic applies.
PROVISION 3 · Art. 5(1)(c): Social scoring
Does the system score, classify, or evaluate people based on their social behaviour or personal characteristics?
Examples: systems that rate citizens' trustworthiness, classify people for broad treatment based on personality traits, or assess "worth" across domains.
Is the score used in contexts unrelated to where the data was originally collected?
For example: financial behaviour used to decide school admission, or online activity used for employment screening.
Does the resulting treatment cause detriment that is unjustified or disproportionate to the behaviour or its gravity?
Either of the last two conditions suffices: the prohibition covers scores leading to cross-context detriment, to unjustified or disproportionate treatment, or to both, as the sketch below shows.
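Unlike the cumulative test in Provision 1, the two detriment conditions here are alternatives. A sketch under the same hypothetical naming convention as above:

```python
def is_prohibited_art5_1c(
    scores_social_behaviour: bool,  # system scores/classifies people
    cross_context_detriment: bool,  # limb (i): detriment in unrelated contexts
    unjustified_detriment: bool,    # limb (ii): unjustified or disproportionate
) -> bool:
    # Scoring is a necessary element; the two detriment limbs are
    # alternatives ("either or both" in the Act's wording).
    return scores_social_behaviour and (cross_context_detriment or unjustified_detriment)
```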
PROVISION 4 · Art. 5(1)(d): Crime-risk prediction based on profiling
Does the system predict the risk that a specific individual will commit a criminal offence?
Is the prediction based solely on profiling or on assessing personality traits and characteristics, without objective, verifiable facts directly linked to criminal activity?
Exception: systems that support a human assessment already based on objective, verifiable facts directly linked to a criminal activity (for example, supporting an investigation) are not covered by this prohibition.
PROVISION 5 · Art. 5(1)(e): Untargeted scraping for facial recognition databases
Does the system create or expand facial recognition databases by scraping facial images from the internet or CCTV footage in an untargeted manner?
This is a flat prohibition with no exceptions.
PROVISION 6 · Art. 5(1)(f): Emotion inference in workplaces and schools
Does the system infer emotions of people in workplace or educational institution settings?
Examples: monitoring employee frustration, analysing student engagement through facial expressions.
Is the system used for medical or safety reasons? If so, the prohibition does not apply (see the sketch after the examples below).
Examples: detecting driver fatigue, monitoring patient distress in a medical setting.
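Provisions 6 to 8 follow a different shape: a prohibition with a carve-out. A sketch of the emotion-inference test, where a medical or safety purpose defeats the prohibition entirely (parameter names are illustrative):

```python
def is_prohibited_art5_1f(
    infers_emotions: bool,            # system infers emotions of natural persons
    in_workplace_or_education: bool,  # setting is a workplace or educational institution
    medical_or_safety_purpose: bool,  # e.g. driver-fatigue detection, patient monitoring
) -> bool:
    # The medical/safety exception removes the prohibition entirely;
    # otherwise, emotion inference is prohibited only in the named settings.
    if medical_or_safety_purpose:
        return False
    return infers_emotions and in_workplace_or_education
```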
PROVISION 7 · Art. 5(1)(g): Biometric categorisation of sensitive attributes
Does the system categorise people based on biometric data to infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation?
Is the use limited to labelling or filtering of lawfully acquired biometric datasets in a law enforcement context? If so, the prohibition does not apply.
The law enforcement exception is narrow and subject to strict safeguards.
PROVISION 8 · Art. 5(1)(h): Real-time remote biometric identification in public spaces
Does the system perform real-time remote biometric identification (e.g. live facial recognition) in publicly accessible spaces?
Is it used by law enforcement under one of the narrow exceptions (targeted search for victims or missing persons, prevention of a specific and imminent threat such as a terrorist attack, or locating suspects of serious crimes)?
Is the use covered by prior authorisation from a judicial or independent administrative authority AND a Fundamental Rights Impact Assessment (FRIA)? Both safeguards are required, as the sketch below shows.
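Here the exception itself is gated: even a qualifying law-enforcement use stays prohibited unless both safeguards are in place. A sketch of that two-stage check (names are hypothetical):

```python
def is_permitted_rbi(
    qualifies_for_exception: bool,  # one of the narrow law-enforcement grounds
    has_prior_authorisation: bool,  # judicial or independent administrative authority
    has_fria: bool,                 # Fundamental Rights Impact Assessment completed
) -> bool:
    # Real-time remote biometric identification in publicly accessible spaces
    # is prohibited by default; permission requires the exception AND both safeguards.
    return qualifies_for_exception and has_prior_authorisation and has_fria
```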