The EU AI Act, which entered into force on 1 August 2024, creates a risk-tiered regulatory framework for artificial intelligence systems deployed in the European Union. Understanding which tier your AI system falls into is the first, and most consequential, compliance decision your organization will make.
The Risk Tiers
The EU AI Act divides AI systems into four risk categories: unacceptable risk (banned), high risk (heavily regulated), limited risk (transparency obligations only), and minimal risk (essentially unregulated). The compliance burden differs dramatically between tiers, so classification accuracy is critical.
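As a rough first-pass triage, the tiers can be encoded as an ordered classification. The sketch below is illustrative only, not legal advice; the predicate names are our assumptions, not terms from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers, from most to least regulated."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Article 5)
    HIGH = "high"                  # Annex I safety components or Annex III use cases
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # no mandatory obligations

def triage(prohibited_practice: bool,
           annex_i_safety_component: bool,
           annex_iii_use_case: bool,
           interacts_with_people: bool) -> RiskTier:
    """Hypothetical first-pass classification; a real determination needs legal review."""
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if annex_i_safety_component or annex_iii_use_case:
        return RiskTier.HIGH
    if interacts_with_people:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

The ordering matters: a prohibited use stays prohibited even if it would also match a high-risk category.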
Annex I: High-Risk Sectors
AI systems are classified as high-risk when they serve as safety components of products (or are themselves products) governed by the EU product safety legislation listed in Annex I, and that product must undergo third-party conformity assessment. The listed frameworks cover machinery, medical devices, civil aviation, motor vehicles, marine equipment, rail systems, and agricultural and forestry vehicles, among others. If your AI system is embedded in a product regulated under any of these frameworks, assume it is high-risk until assessed otherwise.
Annex III: High-Risk Use Cases
Annex III identifies eight use-case categories where AI deployment is deemed high-risk regardless of the sector in which it operates: biometric identification, critical infrastructure management, education and vocational training, employment and worker management, access to essential services, law enforcement, migration and asylum, and the administration of justice and democratic processes. Each category has specific sub-criteria that determine whether a particular system qualifies, and Article 6(3) exempts systems that do not pose a significant risk of harm to health, safety, or fundamental rights.
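To make those categories actionable, a data team might encode them as an internal screening checklist. In the sketch below, the category keys paraphrase Annex III, and the screening questions are our illustrative assumptions for a questionnaire, not the Act's sub-criteria text.

```python
# Category keys paraphrase Annex III; the questions are illustrative
# assumptions for an internal questionnaire, not the Act's wording.
ANNEX_III_SCREEN = {
    "biometrics": "Does the system perform biometric identification or categorisation?",
    "critical_infrastructure": "Is it a safety component in managing critical infrastructure?",
    "education": "Does it gate access to, or evaluate outcomes in, education or training?",
    "employment": "Is it used for recruitment, promotion, task allocation, or worker monitoring?",
    "essential_services": "Does it control access to essential services, e.g. credit or benefits?",
    "law_enforcement": "Is it used by or on behalf of law enforcement authorities?",
    "migration_asylum": "Is it used in migration, asylum, or border control management?",
    "justice_democracy": "Does it assist courts or influence democratic processes?",
}

def flagged_categories(answers: dict[str, bool]) -> list[str]:
    """Return the categories whose screening question was answered 'yes'."""
    return [category for category, hit in answers.items() if hit]
```

A "yes" in any category is a trigger for the Article 6(3) significant-risk analysis, not a final classification.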
Key Obligations for High-Risk Systems
High-risk AI systems face mandatory requirements across eight dimensions: a risk management system, data governance practices, technical documentation, record-keeping and logging, transparency and information for deployers, human oversight provisions, accuracy and robustness requirements, and cybersecurity requirements. The most demanding of these, particularly for data teams, is the record-keeping requirement in Article 12, which mandates that high-risk systems automatically log events over their lifetime in a form suitable for traceability and post-market monitoring.
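What that can look like in practice: the minimal sketch below writes one structured record per inference call. The field set is our assumption; Article 12 requires automatic event logs suitable for traceability, but does not prescribe a schema.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal structured audit log for a high-risk system's inference calls.
logger = logging.getLogger("ai_act_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("inference_audit.log"))

def log_inference(model_id: str, model_version: str,
                  input_ref: str, output_ref: str,
                  human_reviewer: str | None = None) -> None:
    """Append one audit record per inference.

    References (hashes or storage keys) are logged instead of raw payloads
    so that personal data stays out of the audit trail itself.
    """
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_ref": input_ref,    # e.g., content hash of the input record
        "output_ref": output_ref,  # e.g., content hash of the model output
        "human_reviewer": human_reviewer,  # supports the human-oversight record
    }
    logger.info(json.dumps(event))
```

Logging content references rather than raw payloads lets you reconstruct any decision during post-market monitoring without turning the audit trail itself into a store of personal data.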
Implementation Timeline
The Act applies in stages. Obligations for high-risk AI systems defined under Annex III apply from 2 August 2026; high-risk systems embedded in Annex I products have until 2 August 2027. Systems already placed on the market before the relevant date are grandfathered only until they undergo a significant change in design. The practical implication: if your organization is building a new AI system that falls into a high-risk category today, you must build compliance in from the start, not retrofit it later.
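For planning purposes, the staggered application dates can sit in code next to your release calendar. The constants below reflect our reading of the final Regulation; verify them against the Official Journal text before relying on them.

```python
from datetime import date

# Key application dates under Regulation (EU) 2024/1689 (verify against
# the Official Journal before relying on them).
PROHIBITIONS_APPLY = date(2025, 2, 2)         # Article 5 bans
GPAI_OBLIGATIONS_APPLY = date(2025, 8, 2)     # general-purpose AI rules
ANNEX_III_HIGH_RISK_APPLY = date(2026, 8, 2)  # stand-alone high-risk use cases
ANNEX_I_HIGH_RISK_APPLY = date(2027, 8, 2)    # AI embedded in regulated products

def compliance_deadline(annex_iii: bool, annex_i: bool) -> date | None:
    """Rough deadline lookup for a high-risk system; the earlier date wins if both apply."""
    deadlines = []
    if annex_iii:
        deadlines.append(ANNEX_III_HIGH_RISK_APPLY)
    if annex_i:
        deadlines.append(ANNEX_I_HIGH_RISK_APPLY)
    return min(deadlines) if deadlines else None
```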
How AIClarum Helps
AIClarum's compliance automation layer includes pre-built templates for all eight Annex III categories. Our system automatically maps your model's characteristics to the relevant articles and generates the required technical documentation, risk management evidence, and logging records. Organizations using AIClarum reduce their EU AI Act compliance implementation time from months to days.