🤖 AI QUICK CHECK

EU AI Act — Article 5 Self-Check

Walk through the 8 prohibited AI practices under Article 5 in plain English. Free, 5 minutes, informational only.

All 8 prohibitions
Legal qualifiers
Not legal advice
Last verified: 12 April 2026. Article 5 of the EU AI Act became enforceable on 2 February 2025. Penalties for prohibited practices: up to EUR 35,000,000 or 7% of total worldwide annual turnover, whichever is higher (Art. 99(3)). The European Commission's February 2025 Guidelines on prohibited AI practices inform this self-check. This tool is informational only and does not constitute legal advice. Source: Regulation (EU) 2024/1689, Article 5 — EUR-Lex.
🤖
Free · EU AI Act

AI Quick Check

A structured self-check against the 8 prohibited AI practices in Article 5 of the EU AI Act. Each provision includes its legal qualifiers (not just a single yes/no), so the outcome reflects the actual law rather than raising false alarms.

🔒 No data stored. Your answers are processed in your browser only. Nothing is sent to or stored on our servers.
â„šī¸ This is a self-assessment, not a legal verdict. Companies typically engage qualified counsel before acting on results.

Free account required

Article 5 — 8 prohibited practices

Answer each question for your AI system

Where the law requires multiple conditions (such as "material distortion" or "significant harm"), the follow-up questions appear only if you answered yes to the trigger question.
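The gating described above can be sketched as a small pure function. This is an illustrative sketch only (the names `ProvisionAnswer` and `flagsProvision` are hypothetical, not the tool's actual code): a provision is flagged only when the trigger question and every legal qualifier are answered yes.

```typescript
type ProvisionAnswer = {
  trigger: boolean;       // e.g. "uses subliminal or manipulative techniques?"
  qualifiers: boolean[];  // e.g. ["material distortion?", "significant harm?"]
};

// The qualifiers are only shown (and only matter) if the trigger is yes;
// the provision is flagged only when every required condition holds.
function flagsProvision(a: ProvisionAnswer): boolean {
  if (!a.trigger) return false;       // follow-ups stay hidden
  return a.qualifiers.every(q => q);  // all legal conditions must hold
}
```

This mirrors why a single yes (for example, "uses personalised persuasion") does not by itself put a system under Art. 5(1)(a): material distortion and significant harm must also be present.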

We include this in the report so you can tell multiple checks apart. Not stored anywhere.
PROVISION 1 · Art. 5(1)(a)
Does the system use subliminal techniques (beyond conscious awareness) or purposefully manipulative or deceptive techniques?
Examples: hidden audio or visual cues, dark patterns, AI personas designed to deceive users into specific behaviour.
Does it materially distort the behaviour or decisions of affected persons?
Could it cause significant harm (physical, psychological, or financial)?
PROVISION 2 · Art. 5(1)(b)
Does the system target or use information about vulnerabilities related to age, disability, or socio-economic situation?
Examples: ads targeting children, systems that exploit a known disability, tools that single out low-income users for predatory treatment.
Does it use those vulnerabilities to distort behaviour?
Could it cause significant harm?
PROVISION 3 · Art. 5(1)(c)
Does the system score, classify, or evaluate people based on their social behaviour or personal characteristics?
Examples: systems that rate citizens' trustworthiness, classify people for broad treatment based on personality traits, or assess "worth" across domains.
Is the score used in contexts unrelated to where the data was originally collected?
For example: financial behaviour used to decide school admission, or online activity used for employment screening.
Does the resulting treatment cause detriment that is unjustified or disproportionate?
PROVISION 4 · Art. 5(1)(d)
Does the system predict the risk that a specific individual will commit a criminal offence?
Risk assessments based on objective, verifiable facts directly linked to existing criminal activity (such as supporting an investigation) are not covered by this prohibition.
Is the prediction based solely on profiling or assessing personality traits, without verifiable objective facts directly linked to criminal activity?
PROVISION 5 · Art. 5(1)(e)
Does the system create or expand facial recognition databases by scraping facial images from the internet or CCTV footage in an untargeted manner?
This is a flat prohibition with no exceptions.
PROVISION 6 · Art. 5(1)(f)
Does the system infer emotions of people in workplace or educational institution settings?
Examples: monitoring employee frustration, analysing student engagement through facial expressions.
Is the system used for medical or safety reasons?
Examples: detecting driver fatigue, monitoring patient distress in a medical setting.
PROVISION 7 · Art. 5(1)(g)
Does the system categorise people based on biometric data to infer race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation?
Is this for labelling or filtering of lawfully acquired biometric datasets in a law enforcement context?
The law enforcement exception is narrow and subject to strict safeguards.
PROVISION 8 · Art. 5(1)(h)
Does the system perform real-time remote biometric identification (e.g. live facial recognition) in publicly accessible spaces?
Is it used by law enforcement for a narrow exception (searching for missing persons, preventing a terrorist attack, or identifying suspects in serious crimes)?

Article 5 Self-Check — Summary

This is an informational tool. Results reflect the answers provided and do not constitute legal advice. Companies typically engage qualified EU AI Act counsel before acting on the results of any self-assessment.

Not prohibited? The next question is which risk tier.

Most AI systems do not fall under Article 5. Your actual compliance obligations depend on which risk tier applies: unacceptable, high-risk, limited, or minimal. The EU AI Act Assessment uses Annex III, Article 50, and GPAI rules to give you a full classification plus a phased compliance roadmap.
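The tier order described above can be sketched as a first-match classifier. This is a simplified illustration of the Act's structure, not the assessment's actual logic (the names `RiskTier` and `classify` are hypothetical):

```typescript
type RiskTier = "unacceptable" | "high" | "limited" | "minimal";

// Simplified first-match tiering, per the Act's structure:
// Art. 5 prohibition -> unacceptable; Annex III use case -> high;
// Art. 50 transparency system -> limited; otherwise -> minimal.
function classify(system: {
  prohibited: boolean;  // caught by an Article 5 prohibition
  annexIII: boolean;    // listed high-risk use case (Annex III)
  art50: boolean;       // transparency obligation (Art. 50)
}): RiskTier {
  if (system.prohibited) return "unacceptable";
  if (system.annexIII) return "high";
  if (system.art50) return "limited";
  return "minimal";
}
```

Real classification involves more nuance (Annex I products, Annex III carve-outs, GPAI rules), which is why the full assessment asks considerably more questions.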

Run the EU AI Act Assessment →