Full Roadmap · EU AI Act

AI Act compliance roadmap for your system

Get a full obligation checklist, technical documentation guide, conformity assessment pathway, and phased implementation roadmap — tailored to your AI system and risk level.

Art. 9–15 obligations
Technical docs checklist
Conformity assessment guide
~8 minutes
Step 1 of 4

Your company & AI system

Tell us about your company and the AI system you need to make compliant with the EU AI Act.

Step 2 of 4

Risk classification

Select the risk level for your AI system. If you've already used our free AI Act Classifier, use the result from there. Otherwise, choose the level that best describes your system.

Step 3 of 4

Current compliance status

Assess where you are today against the key AI Act obligations. Be honest — the more accurate your responses, the more useful your roadmap.

Risk Management
A continuous risk management system that identifies, estimates, evaluates, and mitigates risks throughout the system's lifecycle.
A documented risk management system for the AI system has been established and is maintained.
Residual risks are evaluated and documented as acceptable before deployment.
Training Data Governance
Requirements for data and data governance practices — relevance, representativeness, freedom from error, and completeness.
Training, validation, and testing datasets are documented — including their origin, scope, collection methods, and any known limitations.
Data has been examined for biases that could lead to discriminatory outputs, especially for categories protected under EU law.
Technical Documentation
Comprehensive technical documentation (Annex IV) must exist before market placement and be kept up to date.
Technical documentation covering the system's purpose, design, architecture, and validation exists and is maintained.
Automatic logging of system operation (events, decisions, anomalies) is implemented and logs are retained.
Transparency to Users
High-risk systems must be sufficiently transparent so deployers can interpret outputs and use them appropriately.
An Instructions for Use document exists describing the system's purpose, capabilities, limitations, and intended users.
Deployers (or end users) are informed when a decision is significantly influenced or made by the AI system.
Human Oversight
High-risk systems must allow human oversight to prevent or minimise risks to health, safety, or fundamental rights.
The system includes built-in tools enabling human operators to monitor, override, or halt its operation.
Staff operating the system are trained on its limitations and the circumstances requiring human intervention.
Accuracy & Robustness Testing
High-risk systems must achieve appropriate levels of accuracy and be resilient to errors, faults, and inconsistencies.
The system has been tested against defined accuracy, robustness, and cybersecurity metrics before deployment.
A post-market monitoring plan is in place to detect performance degradation or unexpected outputs after deployment.
Step 4 of 4

Get your compliance roadmap

Enter your email to receive the full AI Act compliance roadmap with obligation checklists, technical documentation guide, and phased implementation timeline.

Generating your compliance roadmap…

Mapping obligations to your specific system and risk level.

Analysing risk level & sector
Mapping Art. 9–15 obligations
Building technical documentation checklist
Assessing conformity assessment pathway
Creating phased implementation roadmap
AI Act Readiness
—
Analysing…
Compliance Obligations

Need help executing this roadmap?

Verdaio's experts can build your technical documentation, conduct your conformity assessment, and register your system in the EU database.

Talk to us →

This roadmap is AI-generated guidance only and does not constitute legal advice. Consult a qualified EU AI Act specialist for legal decisions.