The world's first comprehensive AI law
The EU AI Act, formally Regulation (EU) 2024/1689, was adopted by the European Parliament in March 2024 and entered into force on 1 August 2024. It is the world's first binding, comprehensive legal framework for artificial intelligence — establishing rules for how AI systems can be developed, placed on the market, and used within the European Union.
The AI Act takes a risk-based approach: it classifies AI systems into four risk tiers, and the obligations imposed scale with the potential harm the system could cause. Some AI applications are banned outright. Others — in high-risk domains like recruitment, education, law enforcement, and healthcare — face significant pre-market requirements. Many AI systems face transparency obligations only. And the vast majority fall into a minimal-risk category with no mandatory obligations.
The regulation covers the entire AI value chain. Obligations fall on providers (organisations that develop AI systems and place them on the market or put them into service), deployers (organisations that use AI systems in a professional context), importers, and distributors. A company that buys an off-the-shelf AI tool and uses it to screen job applications is a deployer with obligations. A company that fine-tunes a foundation model and sells it to customers is a provider with more extensive obligations.
Critically, the AI Act also includes specific provisions for General Purpose AI (GPAI) models — large foundation models like GPT-4, Gemini, or Llama — placing obligations on the organisations that develop and make these models available, including transparency requirements and, for the most capable models (above 10^25 FLOPs training compute), a full set of systemic risk management obligations.
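To put the 10^25 FLOPs figure in perspective, a common back-of-the-envelope heuristic (not something the Act prescribes) estimates dense-transformer training compute as roughly 6 × parameters × training tokens. A minimal Python sketch, using purely illustrative model figures:

```python
# Rough check against the AI Act's 10^25 FLOPs presumption threshold
# for systemic-risk GPAI models (Art. 51). The 6 * N * D rule of thumb
# is a widely used heuristic for dense transformer training compute,
# not a method prescribed by the Act; model figures are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

# Hypothetical models (parameter and token counts are illustrative only)
models = {
    "small-7b":   (7e9, 2e12),    # 7B params, 2T training tokens
    "large-400b": (4e11, 1.5e13), # 400B params, 15T training tokens
}

for name, (n_params, n_tokens) in models.items():
    flops = estimated_training_flops(n_params, n_tokens)
    flagged = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> systemic-risk presumption: {flagged}")
```

On these illustrative numbers, the 7B model lands around 8.4 × 10^22 FLOPs, well under the threshold, while the 400B model crosses it at roughly 3.6 × 10^25.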
Providers, deployers, and everyone in between
The AI Act applies to organisations in four roles:
- Providers — develop an AI system or GPAI model and place it on the EU market or put it into service in the EU, regardless of whether they are established in the EU or a third country. Providers bear the heaviest obligations, including conformity assessments, technical documentation, and CE marking for high-risk systems.
- Deployers — use an AI system under their authority in a professional context. Where deployers use high-risk AI systems, their obligations include ensuring appropriate human oversight, informing individuals affected by the system's decisions, and, in certain cases, conducting fundamental rights impact assessments.
- Importers — place on the EU market AI systems developed by providers established outside the EU. Importers must verify that the provider has completed the conformity assessment and take corrective action for non-compliant systems.
- Distributors — make AI systems available on the EU market without placing them on the market (i.e. resellers). Distributors must verify compliance before distribution and take corrective action if they believe the system does not comply.
The regulation applies extra-territorially: any organisation providing AI systems to EU users, or where the output of the AI system is used in the EU, falls within scope.
The four-level risk classification
Unacceptable Risk — Prohibited
AI applications that pose an unacceptable threat to fundamental rights are banned entirely. This includes: social scoring systems, real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions), AI that exploits vulnerabilities of specific groups, subliminal manipulation, and predictive policing based solely on profiling.
High Risk — Regulated
AI systems in critical sectors or with significant impacts on fundamental rights. Defined in Annex III: biometric identification, critical infrastructure, education, employment (CV screening, candidate assessment, monitoring), essential services (credit scoring, insurance), law enforcement, migration, and administration of justice. High-risk systems require conformity assessment, CE marking, technical documentation, human oversight, accuracy and robustness testing, and registration in the EU database.
Limited Risk — Transparency Obligations
AI systems with specific transparency risks — primarily chatbots and systems that generate synthetic content. People must be informed when they are interacting with an AI system. AI-generated content (deepfakes, synthetic audio and video) must be labelled, and providers of generative systems must mark outputs as AI-generated in a machine-readable format (Art. 50).
Minimal Risk — No Mandatory Obligations
The vast majority of AI systems: spam filters, recommendation engines, AI in video games, inventory management tools. No mandatory obligations apply, though organisations are encouraged to adopt voluntary codes of conduct (Art. 95).
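To illustrate how this tiering might drive an internal AI inventory, here is a sketch of a triage helper. The use-case-to-tier mapping is abbreviated and hypothetical; real classification requires legal analysis of Art. 5, Annex III, and Art. 50, not a lookup table.

```python
# Illustrative triage helper mapping AI use cases to the Act's four
# risk tiers. Simplified sketch for inventory purposes only; it is
# not a legal determination.

from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk (banned, Art. 5)"
    HIGH = "high risk (Annex III requirements)"
    LIMITED = "limited risk (transparency, Art. 50)"
    MINIMAL = "minimal risk (no mandatory obligations)"

# Partial, hypothetical mapping of use cases to tiers
USE_CASE_TIERS = {
    "social_scoring":   RiskTier.PROHIBITED,
    "cv_screening":     RiskTier.HIGH,
    "credit_scoring":   RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter":      RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Default unknown use cases to HIGH so they get human review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for case in ("cv_screening", "spam_filter", "video_upscaler"):
    print(f"{case}: {triage(case).value}")
```

Defaulting unknown cases to the high-risk tier is a deliberately conservative choice: it forces a human to look before a system is assumed unregulated.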
Terms you need to understand
AI System (definition)
A machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment, and which infers from the input it receives how to generate outputs such as predictions, recommendations, decisions, or content that can influence physical or virtual environments. The definition is technology-neutral and intentionally broad.
General Purpose AI (GPAI) Model
An AI model trained on broad data at scale that is capable of competently performing a wide range of distinct tasks and can be integrated into various downstream applications — regardless of how it is deployed.
Conformity Assessment
The process by which a provider demonstrates that a high-risk AI system meets the requirements of the AI Act before placing it on the market. Most high-risk systems follow an internal-control (self-assessment) procedure; biometric systems may require third-party assessment by a notified body.
Human Oversight
High-risk AI systems must be designed to be effectively overseen by natural persons. Deployers must ensure that humans with the competence and authority to override or stop the system are in place and capable of doing so.
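One way such an intervention hook might look in code is sketched below. The confidence-threshold routing and the reviewer role are assumptions for illustration; the Act requires effective oversight measures, not this particular mechanism.

```python
# Sketch of a human-oversight gate: automated outputs the system is
# less confident about are held for a person with authority to
# approve, modify, or stop the action. Thresholds and roles are
# illustrative assumptions, not requirements of the Act.

from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    score: float  # model confidence in the recommended action

AUTO_THRESHOLD = 0.90  # below this, a human must sign off (assumed value)

def requires_human_review(decision: Decision) -> bool:
    """Route lower-confidence decisions to a human reviewer."""
    return decision.score < AUTO_THRESHOLD

for d in (Decision("c-1042", 0.95), Decision("c-1043", 0.62)):
    if requires_human_review(d):
        print(f"{d.subject_id}: held for human review (can override or stop)")
    else:
        print(f"{d.subject_id}: auto-processed, logged for audit")
```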
Fundamental Rights Impact Assessment
Deployers of high-risk AI systems in the cases covered by Art. 27 must assess the potential impact on fundamental rights before deploying the system and notify the market surveillance authority of the results.
CE Marking
High-risk AI systems must bear the CE conformity marking — the same marking used for other regulated products in the EU — before being placed on the market. Affixing CE marking without completing the conformity assessment is a violation.
Phased application dates
Regulation enters into force (1 August 2024)
The EU AI Act is published in the Official Journal and enters into force. The phased transition periods (6, 12, 24, and 36 months, depending on the provision) begin running.
Prohibited practices in force (2 February 2025)
Chapter II prohibitions on unacceptable-risk AI applications become applicable. Organisations using banned AI practices (social scoring, certain biometric surveillance, subliminal manipulation) must cease these activities immediately.
GPAI model obligations in force (2 August 2025)
Rules for General Purpose AI models (Chapter V) become applicable, including transparency requirements and systemic-risk obligations for the most capable models. The European AI Office begins enforcing the GPAI provisions.
High-risk system obligations in force (2 August 2026)
The full framework for high-risk AI systems (Annex III) becomes applicable. Providers must complete conformity assessments, technical documentation, and registration. Deployers must implement oversight and conduct fundamental rights impact assessments.
High-risk AI in regulated products (2 August 2027)
High-risk AI systems that are safety components of products covered by existing EU harmonisation legislation (medical devices, machinery, vehicles) must comply with the AI Act requirements by this extended deadline.
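For teams tracking these milestones programmatically, the timeline can be encoded as a simple lookup. The dates below follow the Act's phased application schedule (Art. 113); the labels are abbreviated.

```python
# Which AI Act provisions apply as of a given date, per the phased
# application schedule. Labels are abbreviated summaries, not the
# Act's own wording.

from datetime import date

MILESTONES = [
    (date(2024, 8, 1), "Entry into force"),
    (date(2025, 2, 2), "Prohibited practices (Chapter II) apply"),
    (date(2025, 8, 2), "GPAI model obligations (Chapter V) apply"),
    (date(2026, 8, 2), "High-risk framework (Annex III) applies"),
    (date(2027, 8, 2), "High-risk AI in regulated products (Annex I) applies"),
]

def provisions_in_force(as_of: date) -> list[str]:
    """Return the milestones already applicable on the given date."""
    return [label for milestone, label in MILESTONES if milestone <= as_of]

for label in provisions_in_force(date(2026, 1, 1)):
    print(label)
```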
What high-risk providers must do
- Establish a risk management system (Art. 9) — an ongoing iterative process identifying, analysing, and mitigating risks associated with the AI system throughout its entire lifecycle.
- Implement data governance measures (Art. 10) — training, validation, and testing datasets must be relevant, representative and, to the best extent possible, free of errors and complete; potential biases must be identified and mitigated.
- Prepare technical documentation (Art. 11 + Annex IV) — comprehensive documentation before market placement covering system design, intended purpose, performance metrics, and risk management measures.
- Enable logging and record-keeping (Art. 12) — high-risk AI systems must have logging capabilities that ensure traceability throughout the system's lifetime, allowing post-market monitoring (a minimal logging sketch follows this list).
- Complete conformity assessment and CE marking (Art. 43–44) — before placing a high-risk system on the EU market, demonstrate compliance through the applicable conformity assessment procedure and affix the CE marking.
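For the Art. 12 logging duty in particular, one plausible engineering pattern is an append-only, structured record per automated decision. A minimal sketch; the schema and field names are assumptions, since the Act requires logging capabilities appropriate to the system rather than any particular format.

```python
# Sketch of Art. 12-style traceability logging: one append-only,
# structured record per automated decision. Schema and field names
# are illustrative assumptions, not a format mandated by the Act.

import json
import uuid
from datetime import datetime, timezone

def log_decision(log_path: str, model_version: str,
                 input_summary: dict, output: dict,
                 human_reviewer: str | None = None) -> str:
    """Append one decision record; return its ID for cross-referencing."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties the output to a system version
        "input_summary": input_summary,  # enough context to reconstruct the case
        "output": output,
        "human_reviewer": human_reviewer,  # supports oversight audits
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_id"]

# Hypothetical example: logging one CV-screening decision
record_id = log_decision(
    "decisions.jsonl", "screener-v2.3",
    {"candidate_id": "c-1042", "role": "analyst"},
    {"score": 0.81, "recommendation": "advance"},
    human_reviewer="hr-reviewer-7",
)
print("logged:", record_id)
```

An append-only JSON Lines file is the simplest durable store; in production the same records would typically go to a write-once log service with retention matching the system's lifetime.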
What's next: Annex III categories, GPAI obligations, sector implications, penalties, and the full compliance checklist
Everything you need to move from understanding the AI Act to complying with it.