Free Assessment
EU AI Act & AI Governance

The world's first comprehensive AI regulation is in force. Does your business know where it stands?

The EU AI Act classifies AI systems by risk level and imposes obligations accordingly. If your company uses, deploys, or develops AI systems, you may already have compliance obligations — and the enforcement timeline is moving fast.

The Regulation

The first comprehensive legal framework for artificial intelligence.

The EU Artificial Intelligence Act is the world's first comprehensive legal framework specifically designed to regulate artificial intelligence. Adopted in 2024, it takes a risk-based approach — the greater the potential harm an AI system can cause, the more stringent the requirements it must meet.

Unlike sector-specific regulations, the AI Act applies horizontally across industries and use cases. It affects not just AI developers but also businesses that deploy or use AI systems — which in practice means the vast majority of modern organisations.

Key Facts
Entered into force
August 2024
Scope
Applies across all sectors and industries
Approach
Risk-based — 4 levels of classification
High-risk AI deadline
Full compliance required by August 2026
Risk Levels

Four risk levels. Four sets of obligations.

Unacceptable Risk

Prohibited AI Practices

These AI systems are banned outright under the EU AI Act. Placing them on the market, putting them into service, or using them in the EU is prohibited.

  • Social scoring based on behaviour or personal characteristics
  • Subliminal or manipulative AI techniques
  • Real-time remote biometric identification in publicly accessible spaces (narrow law-enforcement exceptions apply)
  • AI that exploits vulnerabilities of specific groups
High Risk

Significant Obligations

High-risk AI systems must meet strict requirements before being placed on the market or put into service.

  • AI in critical infrastructure and safety systems
  • AI used in education and employment decisions
  • AI in law enforcement and border control
  • AI in administration of justice
Limited Risk

Transparency Obligations

Users must be clearly informed when they are interacting with AI or consuming AI-generated content.

  • Chatbots and conversational AI systems
  • AI that generates synthetic images, audio, or video
  • Emotion recognition systems
Minimal / No Risk

Few or No Obligations

Most AI systems fall into this category. Voluntary codes of conduct may apply, but mandatory requirements are minimal.

  • Spam filters and email sorting tools
  • AI features in video games
  • Simple recommendation systems
High-Risk Obligations

What high-risk AI systems must comply with.

Requirement 01

Risk Management System

Establish and maintain a continuous risk management process covering the entire AI system lifecycle — from design through deployment and monitoring.

Requirement 02

Data Governance

Training data must be of high quality, relevant, and representative. Bias management processes must be documented and applied throughout development.

Requirement 03

Technical Documentation

Comprehensive technical documentation must be prepared before the system is placed on the market, covering design, capabilities, limitations, and testing results.

Requirement 04

Transparency

Deployers and users must receive clear information about the system's capabilities, limitations, intended purpose, and any known risks or failure modes.

Requirement 05

Human Oversight

Systems must be designed to allow effective human oversight and intervention. Humans must be able to monitor, understand, and override AI outputs where necessary.

Requirement 06

Accuracy & Robustness

Maintain appropriate levels of accuracy throughout the lifecycle and ensure the system is resilient against errors and attempts to alter or manipulate outputs.

Scope

The AI Act applies to more businesses than you think.

🏗️

Providers

Companies that develop AI systems or place them on the EU market face the most stringent obligations — including conformity assessments, registration requirements, and post-market monitoring.

🖥️

Deployers

Companies that use AI systems in a professional context must ensure systems are used appropriately, monitor their performance, and maintain records. Most organisations are deployers under the Act.

📦

Importers & Distributors

Companies that bring non-EU AI systems into the EU market or make them available to others have specific obligations to verify that the AI complies before distribution.

💡

If your business uses AI tools — from recruitment software to customer service chatbots to fraud detection — you are likely a deployer under the AI Act. Most businesses don't yet know which obligations apply to them.

Deadlines

The AI Act enforcement schedule.

Aug
2024
Passed

Regulation Entered into Force

The EU AI Act was published in the Official Journal in July 2024 and entered into force on 1 August 2024. The phased enforcement timeline runs from this date.

Feb
2025
Passed

Prohibited AI Practices Banned

Unacceptable-risk AI systems must have ceased all operation within the EU. Violations from this date onward are subject to enforcement action.

Aug
2025
Passed

GPAI Model Rules Apply

General-purpose AI (GPAI) model providers — including large foundation model developers — face new obligations around transparency, systemic risk assessment, and incident reporting.

Aug
2026
Upcoming

High-Risk AI — Full Compliance Required

All high-risk AI systems listed in Annex III of the AI Act must be fully compliant. This is the most significant deadline for the majority of businesses using or deploying AI.

Aug
2027
Upcoming

Annex I High-Risk Systems & Legacy GPAI Models

High-risk AI systems that are safety components of products regulated under Annex I, together with general-purpose AI models already on the market before August 2025, must comply with the relevant requirements by this extended deadline.

Find out which AI Act obligations apply to your system.

Start with the free EU AI Act Risk Classifier to get your risk level in minutes. Then go deeper with the full Compliance Roadmap — covering all Art. 9–15 obligations, technical documentation, and conformity assessment guidance.