The EU AI Act classifies AI systems by risk level and imposes obligations accordingly. If your company uses, deploys, or develops AI systems, you may already have compliance obligations — and the enforcement timeline is moving fast.
The EU Artificial Intelligence Act is the world's first comprehensive legal framework specifically designed to regulate artificial intelligence. Adopted in 2024, it takes a risk-based approach: the greater the potential harm an AI system can cause, the stricter the requirements it must meet.
Unlike sector-specific regulations, the AI Act applies horizontally across industries and use cases. It affects not just AI developers but also businesses that deploy or use AI systems — which in practice means the vast majority of modern organisations.
Unacceptable-risk AI systems, such as social scoring or systems that manipulate people or exploit their vulnerabilities, are banned outright under the EU AI Act. No deployment, development, or use is permitted.
High-risk AI systems must meet strict requirements before being placed on the market or put into service.
Limited-risk systems, such as chatbots and generative AI, carry transparency obligations: users must be clearly informed when they are interacting with AI or consuming AI-generated content.
Minimal-risk systems make up the final category, and most AI systems fall into it. Voluntary codes of conduct may apply, but mandatory requirements are minimal.
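To make the tiering concrete, here is a minimal Python sketch of the four levels and the broad consequence each carries. The RiskTier enum and OBLIGATIONS mapping are illustrative names of our own, not structures defined anywhere in the Act.

from enum import Enum

class RiskTier(Enum):
    """The four risk levels of the EU AI Act, strictest to lightest."""
    UNACCEPTABLE = "unacceptable"  # banned outright (Art. 5 prohibited practices)
    HIGH = "high"                  # strict requirements before market placement
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # voluntary codes of conduct at most

# Broad consequence of each tier, paraphrasing the Act's structure.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: no development, deployment, or use in the EU.",
    RiskTier.HIGH: "Must meet Art. 9-15 requirements before market placement.",
    RiskTier.LIMITED: "Users must be told they are interacting with AI or AI content.",
    RiskTier.MINIMAL: "No mandatory requirements; voluntary codes may apply.",
}

print(OBLIGATIONS[RiskTier.HIGH])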
Risk management (Art. 9): establish and maintain a continuous risk management process covering the entire AI system lifecycle, from design through deployment and monitoring.
Data governance (Art. 10): training, validation, and testing data must be high quality, relevant, and representative. Bias management processes must be documented and applied throughout development.
Technical documentation (Art. 11): comprehensive technical documentation must be prepared before the system is placed on the market, covering design, capabilities, limitations, and testing results.
Transparency (Art. 13): deployers and users must receive clear information about the system's capabilities, limitations, intended purpose, and any known risks or failure modes.
Human oversight (Art. 14): systems must be designed to allow effective human oversight and intervention. Humans must be able to monitor, understand, and override AI outputs where necessary.
Accuracy and robustness (Art. 15): maintain appropriate levels of accuracy throughout the lifecycle and ensure the system is resilient against errors and attempts to alter or manipulate outputs.
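In practice, teams often track these six obligations as a compliance checklist. The following Python sketch is one hypothetical way to structure that tracking; the Requirement class and its field names are our own invention, while the article numbers follow the list above.

from dataclasses import dataclass

@dataclass
class Requirement:
    article: str       # article of the EU AI Act
    title: str         # short label for the obligation
    done: bool = False

# The six high-risk requirements described above (Art. 12, record-keeping,
# also applies to high-risk systems but is not covered in this list).
CHECKLIST = [
    Requirement("Art. 9",  "Continuous risk management across the lifecycle"),
    Requirement("Art. 10", "Data quality and bias management"),
    Requirement("Art. 11", "Technical documentation before market placement"),
    Requirement("Art. 13", "Clear information for deployers and users"),
    Requirement("Art. 14", "Effective human oversight and intervention"),
    Requirement("Art. 15", "Accuracy and resilience against manipulation"),
]

outstanding = [r for r in CHECKLIST if not r.done]
print(f"{len(outstanding)} of {len(CHECKLIST)} requirements still open")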
Providers, meaning companies that develop AI systems or place them on the EU market under their own name, face the most stringent obligations, including conformity assessments, registration requirements, and post-market monitoring.
Deployers, meaning companies that use AI systems in a professional context, must ensure systems are used appropriately, monitor their performance, and maintain records. Most organisations are deployers under the Act.
Importers and distributors, meaning companies that bring non-EU AI systems into the EU market or make them available to others, have specific obligations to verify that the AI complies before distribution.
If your business uses AI tools — from recruitment software to customer service chatbots to fraud detection — you are likely a deployer under the AI Act. Most businesses don't yet know which obligations apply to them.
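As a first pass, that role logic can be written down as a simple decision function. This is a deliberately simplified, hypothetical sketch with inputs of our own choosing; under the Act, role determination turns on the specifics of each system, and one organisation can hold several roles at once.

def operator_roles(develops_or_rebrands: bool,
                   uses_professionally: bool,
                   imports_or_distributes: bool) -> list[str]:
    """Rough first-pass mapping from activities to AI Act operator roles.

    A single organisation can hold several roles, e.g. a company that
    builds an internal AI tool is both provider and deployer.
    """
    roles = []
    if develops_or_rebrands:
        roles.append("provider")            # strictest obligations
    if uses_professionally:
        roles.append("deployer")            # most organisations land here
    if imports_or_distributes:
        roles.append("importer/distributor")
    return roles or ["no operator role identified (verify with counsel)"]

# A company using a third-party recruitment screening tool:
print(operator_roles(develops_or_rebrands=False,
                     uses_professionally=True,
                     imports_or_distributes=False))
# -> ['deployer']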
Published in the EU Official Journal on 12 July 2024, the EU AI Act entered into force on 1 August 2024. The phased enforcement timeline began running from this date.
From 2 February 2025, unacceptable-risk AI systems must have ceased all operation within the EU. Violations from this date onward are subject to enforcement action.
From 2 August 2025, providers of General Purpose AI (GPAI) models, including large foundation model developers, face new obligations around transparency, systemic risk assessment, and incident reporting.
From 2 August 2026, all high-risk AI systems listed in Annex III of the AI Act must be fully compliant. This is the most significant deadline for the majority of businesses using or deploying AI.
By 2 August 2027, AI systems already in use before the Act applied (legacy systems) must be brought into compliance with the high-risk requirements under this extended deadline.
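The milestones above collapse naturally into a small date lookup that answers the question "which obligations already apply today?". The dates come from the timeline above; the function and data structure are our own illustrative sketch.

from datetime import date

# Enforcement milestones from the timeline above.
MILESTONES = [
    (date(2024, 8, 1), "Act in force; phased timeline starts"),
    (date(2025, 2, 2), "Unacceptable-risk practices banned"),
    (date(2025, 8, 2), "GPAI model provider obligations apply"),
    (date(2026, 8, 2), "Annex III high-risk systems must be compliant"),
    (date(2027, 8, 2), "Extended deadline for legacy systems"),
]

def obligations_in_effect(today: date) -> list[str]:
    """Return every milestone whose date has already passed."""
    return [label for when, label in MILESTONES if when <= today]

for label in obligations_in_effect(date.today()):
    print(label)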
Start with the free EU AI Act Risk Classifier to get your risk level in minutes. Then go deeper with the full Compliance Roadmap — covering all Art. 9–15 obligations, technical documentation, and conformity assessment guidance.