🤖 AI Governance

EU AI Act ✦ Free Overview

Regulation (EU) 2024/1689 is the world's first comprehensive legal framework for artificial intelligence. If your organisation uses, develops, or deploys AI systems in any professional context, you have compliance obligations — most businesses don't yet know where they stand.

Regulation (EU) 2024/1689
Published August 2024
Risk-based approach
Classify your AI systems under the EU AI Act — risk level, applicable obligations, and immediate actions. Free, no signup required.
AI Act Risk Classifier → Compliance Roadmap

The world's first comprehensive AI law

The EU AI Act, formally Regulation (EU) 2024/1689, was adopted by the European Parliament in March 2024 and entered into force on 1 August 2024. It is the world's first binding, comprehensive legal framework for artificial intelligence — establishing rules for how AI systems can be developed, placed on the market, and used within the European Union.

The AI Act takes a risk-based approach: it classifies AI systems into four risk tiers, and the obligations imposed scale with the potential harm the system could cause. Some AI applications are banned outright. Others — in high-risk domains like recruitment, education, law enforcement, and healthcare — face significant pre-market requirements. Many AI systems face transparency obligations only. And the vast majority fall into a minimal-risk category with no mandatory obligations.

The regulation covers the entire AI value chain. Obligations fall on providers (organisations that develop AI systems and place them on the market or put them into service), deployers (organisations that use AI systems in a professional context), importers, and distributors. A company that buys an off-the-shelf AI tool and uses it to screen job applications is a deployer with obligations. A company that fine-tunes a foundation model and sells it to customers is a provider with more extensive obligations.

Critically, the AI Act also includes specific provisions for General Purpose AI (GPAI) models — large foundation models like GPT-4, Gemini, or Llama — placing obligations on the organisations that develop and make these models available, including transparency requirements and, for the most capable models (above 10^25 FLOPs training compute), a full set of systemic risk management obligations.

Providers, deployers, and everyone in between

The AI Act applies to organisations in four roles:

  • Providers — develop an AI system or GPAI model and place it on the EU market or put it into service in the EU, regardless of whether they are established in the EU or a third country. Providers bear the heaviest obligations, including conformity assessments, technical documentation, and CE marking for high-risk systems.
  • Deployers — use an AI system under their authority in a professional context. Where deployers use high-risk AI systems, they have obligations including conducting fundamental rights impact assessments, ensuring appropriate human oversight, and informing individuals when interacting with AI.
  • Importers — place on the EU market AI systems developed by providers established outside the EU. Importers must verify that the provider has completed the conformity assessment and take corrective action for non-compliant systems.
  • Distributors — make AI systems available on the EU market without placing them on the market (i.e. resellers). Distributors must verify compliance before distribution and take corrective action if they believe the system does not comply.

The regulation applies extra-territorially: any organisation providing AI systems to EU users, or where the output of the AI system is used in the EU, falls within scope.

The four-level risk classification

Unacceptable Risk — Prohibited

AI applications that pose an unacceptable threat to fundamental rights are banned entirely. This includes: social scoring systems by public authorities, real-time remote biometric identification in publicly accessible spaces (with narrow exceptions for law enforcement), AI that exploits vulnerabilities of specific groups, subliminal manipulation, and predictive policing based solely on profiling.

High Risk — Regulated

AI systems in critical sectors or with significant impacts on fundamental rights. Defined in Annex III: biometric identification, critical infrastructure, education, employment (CV screening, candidate assessment, monitoring), essential services (credit scoring, insurance), law enforcement, migration, and administration of justice. High-risk systems require conformity assessment, CE marking, technical documentation, human oversight, accuracy and robustness testing, and registration in the EU database.

Limited Risk — Transparency Obligations

AI systems with specific transparency risks — primarily chatbots and systems that generate synthetic content. Deployers must inform users that they are interacting with an AI system. AI-generated content (deepfakes, synthetic audio, video) must be labelled as such, and providers of generative AI systems must ensure synthetic outputs are marked in a machine-readable format.

Minimal Risk — No Mandatory Obligations

The vast majority of AI systems: spam filters, recommendation engines, AI in video games, inventory management tools. No mandatory obligations apply, though voluntary codes of conduct are encouraged. Organisations may choose to follow voluntary commitments under AI Act codes of practice.
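The tiered logic above can be sketched as a simple decision function. The following is a toy illustration in Python; the category labels and the mapping of use cases to tiers are hypothetical simplifications for readability, not a legal determination:

```python
# Toy sketch of the AI Act's four-tier logic. The use-case labels and the
# sets below are illustrative placeholders, not the Act's legal tests.

PROHIBITED = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"recruitment", "credit_scoring", "biometric_id", "exam_assessment"}
LIMITED_RISK = {"chatbot", "synthetic_content_generator"}

def risk_tier(use_case: str) -> str:
    """Return the (simplified) risk tier for a hypothetical use-case label."""
    if use_case in PROHIBITED:
        return "unacceptable"  # banned outright (Chapter II)
    if use_case in HIGH_RISK:
        return "high"          # full Annex III obligations
    if use_case in LIMITED_RISK:
        return "limited"       # transparency obligations only
    return "minimal"           # no mandatory obligations

print(risk_tier("recruitment"))  # high
print(risk_tier("spam_filter"))  # minimal
```

The ordering of the checks mirrors the Act's structure: prohibition is tested first, and anything that matches no other tier defaults to minimal risk.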

Terms you need to understand

AI System (definition)

A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. The definition is technology-neutral and intentionally broad.

General Purpose AI (GPAI) Model

An AI model trained on broad data at scale that is capable of competently performing a wide range of distinct tasks and can be integrated into various downstream applications — regardless of how it is deployed.

Conformity Assessment

The process by which a provider demonstrates that a high-risk AI system meets the requirements of the AI Act before placing it on the market. Most high-risk systems can be self-assessed via internal control; certain biometric systems require third-party assessment by a Notified Body.

Human Oversight

High-risk AI systems must be designed to be effectively overseen by natural persons. Deployers must ensure that humans with the competence and authority to override or stop the system are in place and capable of doing so.

Fundamental Rights Impact Assessment

Deployers of high-risk AI systems in areas covered by Art. 27 must assess the potential impact on fundamental rights before deploying the system and notify the market surveillance authority of the results.

CE Marking

High-risk AI systems must bear the CE conformity marking — the same marking used for other regulated products in the EU — before being placed on the market. Affixing CE marking without completing the conformity assessment is a violation.

Phased application dates

Aug 2024

Regulation enters into force

The EU AI Act is published in the Official Journal and enters into force. The 24-month and 36-month transition periods for different provisions begin running.

Feb 2025

Prohibited practices in force

Chapter II prohibitions on unacceptable-risk AI applications become applicable. Organisations using banned AI practices (social scoring, certain biometric surveillance, subliminal manipulation) must cease these activities immediately.

Aug 2025

GPAI model obligations in force

Rules for General Purpose AI models (Chapter V) become applicable, including transparency requirements and systemic risk obligations for the most capable models. The AI Office begins enforcing GPAI obligations.

Aug 2026

High-risk system obligations in force

The full framework for high-risk AI systems (Annex III) becomes applicable. Providers must complete conformity assessments, technical documentation, and registration. Deployers must implement oversight and conduct fundamental rights impact assessments.

Aug 2027

High-risk AI in regulated products

High-risk AI systems that are safety components of products covered by existing EU harmonisation legislation (medical devices, machinery, vehicles) must comply with the AI Act requirements at this extended deadline.

What high-risk providers must do

  • Establish a risk management system (Art. 9) — an ongoing iterative process identifying, analysing, and mitigating risks associated with the AI system throughout its entire lifecycle.
  • Implement data governance measures (Art. 10) — training, validation, and testing datasets must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete; potential biases must be identified and mitigated.
  • Prepare technical documentation (Art. 11 + Annex IV) — comprehensive documentation before market placement covering system design, intended purpose, performance metrics, and risk management measures.
  • Enable logging and record-keeping (Art. 12) — high-risk AI systems must have logging capabilities that ensure traceability throughout the system's lifetime, allowing post-market monitoring.
  • Complete conformity assessment and CE marking (Art. 43–44) — before placing a high-risk system on the EU market, demonstrate compliance through the applicable conformity assessment procedure and affix the CE marking.
◈ Premium Deep-Dive

Annex III categories, GPAI obligations, sector implications, penalties & full checklist

Everything you need to move from understanding the AI Act to complying with it.

High-Risk Categories (Annex III) in Detail

Annex III lists the eight areas where AI systems are presumptively classified as high-risk, irrespective of how the provider markets them (subject to the narrow Article 6(3) derogation for systems that pose no significant risk of harm). Any AI system falling into these categories triggers the full provider and deployer obligations:

  • Biometric systems — remote biometric identification (real-time and post) and biometric categorisation (with limited law enforcement exceptions)
  • AI in critical infrastructure management — electricity, water, transport network management systems
  • Educational and vocational training — systems determining access, assigning students, evaluating learning outcomes, assessing examinations
  • Employment, worker management, and access to self-employment — recruitment tools, CV screening, candidate ranking, promotion and termination decisions, performance monitoring
  • Access to essential private and public services — credit scoring, life and health insurance risk assessment, emergency services dispatch
  • Law enforcement — assessing the reliability of evidence, predicting offences based on profiling, polygraph-type tools
  • Migration, asylum, and border control management — risk assessments, document authenticity checking, applications processing
  • Administration of justice and democratic processes — research assistance to courts, influencing elections

GPAI Model Obligations

Organisations developing or distributing General Purpose AI models (regardless of whether those models are also high-risk AI systems) must comply with Chapter V requirements from August 2025. All GPAI model providers must: draw up and maintain technical documentation; provide information and documentation to downstream providers; publish a summary of training data; comply with EU copyright law; and cooperate with the AI Office.

GPAI models with "systemic risk" — presumed where cumulative training compute exceeds 10^25 FLOPs — face additional obligations: model evaluation and adversarial testing (red-teaming), reporting serious incidents to the AI Office without undue delay, cybersecurity measures, and documentation of estimated energy consumption. Models widely reported to be in scope include the largest versions of GPT, Gemini, Claude, and Llama.
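The systemic-risk presumption is a straightforward numeric threshold on training compute. A minimal sketch, where the compute figures passed in are illustrative rather than official estimates:

```python
# Art. 51(2): a GPAI model is presumed to have systemic risk when its
# cumulative training compute exceeds 10^25 floating-point operations.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model crosses the 10^25 FLOPs presumption threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Illustrative compute figures only -- not official estimates for any model.
print(presumed_systemic_risk(2e25))  # True: above the threshold
print(presumed_systemic_risk(3e24))  # False: below it
```

Note that the threshold is a rebuttable presumption: the Commission can also designate models as systemic-risk based on capability criteria, which a pure FLOPs check does not capture.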

Conformity Assessment Process

For most high-risk AI systems (Annex III, categories 2–8), providers can conduct a self-assessment — also called an internal conformity assessment. The process involves verifying that the system meets all requirements of Chapter III Section 2 (Arts. 9–15), drawing up the EU Declaration of Conformity, and maintaining records for 10 years after market placement. For certain biometric identification systems and law enforcement applications, third-party conformity assessment bodies (Notified Bodies) must be involved.

Sector Implications

Human Resources

Any AI system used for CV screening, candidate ranking, automated interview assessment, or employee performance monitoring falls under Annex III high-risk classification. Employers must implement meaningful human oversight, document the system's intended purpose and limitations, and inform candidates that AI tools are used in selection. Workers must also be informed when AI systems monitor their performance.

Healthcare

AI used as safety components of medical devices (diagnostic imaging AI, clinical decision support tools) falls under the extended August 2027 deadline via existing medical device regulation. AI used for patient triage, treatment recommendations, or hospital resource allocation that is not a medical device may be high-risk under Annex III (emergency healthcare patient triage is listed under access to essential services) or limited risk.

Financial Services

Credit scoring AI systems — used for consumer lending, mortgage assessments, and insurance pricing — are explicitly listed in Annex III. Banks and insurers deploying these tools must complete fundamental rights impact assessments, implement human review processes for adverse decisions, and maintain logs of all automated outputs. The interaction with GDPR Art. 22 (no solely automated decisions) creates overlapping obligations.

Legal Sector

AI used to assist courts in researching and interpreting facts and law, or predicting case outcomes, is high-risk under Annex III. Law firms using AI for research assistance are likely deployers with oversight obligations, though most current legal research tools are designed as assistants without autonomous decision-making — which may place them outside high-risk scope if designed carefully.

Penalty Structure

The AI Act establishes some of the highest penalties in EU law, exceeding even the GDPR's 4% turnover cap and reflecting the seriousness with which the EU regards AI risks:

  • Violations of prohibited AI practices (Chapter II): up to €35 million or 7% of global annual turnover (whichever is higher)
  • Violations of other AI Act provisions: up to €15 million or 3% of global annual turnover
  • Providing incorrect, incomplete, or misleading information to authorities: up to €7.5 million or 1% of global annual turnover
  • For SMEs and startups, each fine is capped at whichever of the two amounts (absolute figure or percentage of turnover) is lower
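The "whichever is higher" rule, and its inversion for SMEs, is simple arithmetic. A sketch assuming turnover expressed in euros:

```python
def max_fine(turnover_eur: float, abs_cap_eur: float, pct: float,
             sme: bool = False) -> float:
    """Maximum fine under the AI Act's dual-cap structure.

    Standard rule: the higher of the absolute cap and pct * turnover.
    SME/startup rule: the lower of the two amounts applies instead.
    """
    pct_amount = turnover_eur * pct
    return min(abs_cap_eur, pct_amount) if sme else max(abs_cap_eur, pct_amount)

# Prohibited-practice tier: EUR 35m or 7% of global annual turnover.
print(max_fine(1_000_000_000, 35_000_000, 0.07))            # 70000000.0
print(max_fine(1_000_000_000, 35_000_000, 0.07, sme=True))  # 35000000.0
```

For a company with EUR 1bn turnover, 7% (EUR 70m) exceeds the EUR 35m floor, so the percentage governs; for an SME the calculation flips and the lower amount caps the fine.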

Full Compliance Checklist

1. Create an inventory of all AI systems used or developed by your organisation, including third-party tools and models integrated into your products.
2. Classify each AI system by risk tier (prohibited, high-risk Annex III, limited risk, minimal risk) using the AI Act framework.
3. Verify immediately that none of your AI systems or use cases fall under the Chapter II prohibited practices (effective from February 2025).
4. For each high-risk system where you are the provider: establish a risk management system (Art. 9) and document it.
5. For each high-risk system: prepare technical documentation (Annex IV) covering design, intended purpose, performance, and risk mitigation measures.
6. Implement logging and record-keeping mechanisms for all high-risk AI systems to ensure traceability.
7. For high-risk systems where you are a deployer: conduct a Fundamental Rights Impact Assessment (Art. 27) before deployment.
8. Establish human oversight procedures: identify who can override or stop the AI system, and ensure they have the competence and authority to do so.
9. For chatbots and systems generating synthetic content: implement user notifications confirming AI interaction and label AI-generated content.
10. If you provide GPAI models: prepare technical documentation, publish training data summaries, and implement copyright compliance measures (from August 2025).
11. Assign AI Act responsibility: designate an AI governance lead or team to own the compliance programme and maintain records.
12. Register high-risk AI systems in the EU database (Article 49) before market placement or deployment (from August 2026).
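Steps 1 and 2 of the checklist amount to building a structured AI inventory. A minimal sketch of one possible record shape; the field names and example entries are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in an AI-system inventory (hypothetical field names)."""
    name: str
    role: str                        # "provider" | "deployer" | "importer" | "distributor"
    risk_tier: str = "unclassified"  # later: "prohibited" | "high" | "limited" | "minimal"
    annex_iii_category: Optional[str] = None
    third_party: bool = False        # bought off the shelf vs. developed in-house

inventory = [
    AISystemRecord("CV screening tool", role="deployer",
                   risk_tier="high", annex_iii_category="employment",
                   third_party=True),
    AISystemRecord("Internal spam filter", role="deployer", risk_tier="minimal"),
]

# High-risk entries drive the rest of the checklist (FRIA, oversight, logging).
high_risk = [r.name for r in inventory if r.risk_tier == "high"]
print(high_risk)  # ['CV screening tool']
```

Even a flat list like this makes the later steps tractable: filtering on `risk_tier` and `role` tells you which systems need a fundamental rights impact assessment, oversight procedures, and database registration.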

How to Use the Verdaio AI Act Tools

The EU AI Act Risk Classifier guides you through a structured questionnaire to classify your AI system by risk tier and identify the specific obligations that apply — free, no signup required. The AI Act Compliance Roadmap is the premium deep-dive: it maps your specific high-risk system against the full set of Arts. 9–15 obligations, generates a technical documentation checklist, and produces a phased compliance roadmap ahead of the August 2026 deadline.

AI Act Risk Classifier (free) →  |  Compliance Roadmap (premium) →

Unlock the full EU AI Act guide

Get the complete Annex III breakdown, GPAI obligations, sector implications, penalty structure, and 12-point compliance checklist.

Unlock full guide — €990/year

Full access to all 6 learning guides + all premium tools