EU AI Act Assessment

Classify your AI system and build your compliance roadmap

Describe your AI system and your current readiness. Get a full EU AI Act compliance report with risk classification, obligations, a technical documentation checklist, and a phased implementation roadmap.

15 to 20 minutes
Classification + full roadmap
PDF + email delivery
Bilingual EN/PT
Prohibited
High Risk
Limited Risk
Minimal Risk

Based on Regulation (EU) 2024/1689 (the EU AI Act)

🧭
Paid Assessment

EU AI Act Assessment

Part 1 classifies your AI system under the EU AI Act. Part 2 assesses readiness against Art. 9 to 15 obligations and builds a phased implementation roadmap. One credit, one combined report.

Get access

Questions? Contact us

Classification inputs

Step 1 of 4

About your company

We use this to contextualise your AI Act obligations. Some sectors face stricter requirements.

Step 2 of 4

Describe your AI system

Be specific. The more detail you provide, the more accurate the classification.

e.g. Analyses CVs and scores candidates 1-100. Scores below 50 are automatically rejected.
Step 3 of 4

A few more questions

These help us classify your system accurately under the EU AI Act's risk tiers.

Next step: one final set of questions about your current readiness. Your credit is only used when the final report is generated.


Step 4 of 4: Readiness

Current compliance status

Assess where you are today against the key AI Act obligations. Be honest: the more accurate your responses, the more useful your roadmap. All answers are saved locally for 7 days if you need to come back.

Risk Management
Continuous risk management across the AI lifecycle.
A documented risk management system for the AI system has been established and is maintained.
Residual risks are evaluated and documented as acceptable before deployment.
Training Data Governance
Data governance: relevance, representativeness, accuracy, completeness.
Training, validation, and testing datasets are documented (origin, scope, collection methods, limitations).
Data has been examined for biases that could lead to discriminatory outputs.
Technical Documentation
Technical documentation (Annex IV) before market placement.
Technical documentation covering purpose, design, architecture, and validation exists and is maintained.
Automatic logging of system operation is implemented and logs are retained.
Transparency to Users
High-risk systems must be sufficiently transparent to deployers.
An Instructions for Use document describes purpose, capabilities, limitations, and intended users.
Deployers or end users are informed when a decision is significantly influenced or made by the AI.
Human Oversight
Built-in controls to prevent or minimise risks to fundamental rights.
The system includes built-in tools enabling human operators to monitor, override, or halt its operation.
Staff operating the system are trained on its limitations and when human intervention is required.
Accuracy and Robustness
Appropriate accuracy and resilience to errors, faults, inconsistencies.
The system has been tested against defined accuracy, robustness, and cybersecurity metrics before deployment.
A post-market monitoring plan is in place to detect performance degradation or unexpected outputs.

Ephemeral AI processing (Claude by Anthropic). No data stored or used for training. Results do not constitute legal advice.

Building your compliance roadmap

Mapping obligations and readiness to your system and risk level.

This usually takes about a minute. Please don't close this page.

Analysing risk level and sector
Mapping Art. 9 to 15 obligations
Building technical documentation checklist
Assessing conformity pathway
Creating phased roadmap
AI Act Readiness
Compliance Obligations
This roadmap covers the EU AI Act in full. Each obligation is tagged with its enforcement status.

AI-generated for guidance only. Not legal advice. See our Privacy Policy.

Informational use only. This tool is provided for awareness purposes. It does not constitute legal, regulatory, or professional advice. Results are indicative only. Verdaio accepts no liability for decisions made based on this tool's output. Your inputs are processed ephemerally and are not stored.