
EU AI Act Quick Reference Guide

Everything you need to know about the EU AI Act in one place. Understand risk categories, compliance obligations, and implementation timelines.

Key Compliance Dates
Entry Into Force: August 2024
Prohibited AI Ban: February 2025
GPAI Rules Apply: August 2025
Most Provisions Apply (incl. Annex III High-Risk): August 2026
High-Risk AI in Regulated Products (Annex I): August 2027

What is the EU AI Act?

The EU AI Act is a risk-based regulatory framework that classifies AI systems into four categories based on the level of risk they pose to health, safety, and fundamental rights.

• Prohibited (Unacceptable Risk): AI systems banned in the EU
• High (Strict Compliance): healthcare, education, employment, credit scoring, law enforcement
• Limited (Transparency): chatbots, emotion recognition, deepfakes, biometric categorization
• Minimal (No Requirements): spam filters, AI in video games, inventory management
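The four-tier model can be sketched as a simple lookup. The example systems and their tier assignments below are illustrative assumptions only, not legal determinations:

```python
# Sketch of the four-tier risk model; tier descriptions follow the
# summary above, example systems are hypothetical placements.
RISK_TIERS = {
    "prohibited": "banned in the EU",
    "high": "strict compliance obligations",
    "limited": "transparency obligations",
    "minimal": "no mandatory requirements",
}

EXAMPLE_SYSTEMS = {
    "credit scoring model": "high",
    "customer service chatbot": "limited",
    "spam filter": "minimal",
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier} ({RISK_TIERS[tier]})")
```

In practice, classification depends on the system's intended purpose and context of use, so a static mapping like this is only a starting point for an inventory exercise.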

Annex III: High-Risk AI Categories

Eight areas in which AI systems are presumed high-risk

1. Biometrics
  • Remote biometric identification
  • Biometric categorization based on sensitive attributes
  • Emotion recognition systems
2. Critical Infrastructure
  • Safety components of critical infrastructure
  • Road traffic management
  • Water, gas, electricity supply systems
3. Education & Training
  • Determining access or admission to educational institutions
  • Assessment of students
  • Proctoring and exam monitoring
4. Employment
  • Recruitment and candidate screening
  • Performance evaluation
  • Task allocation and monitoring
5. Essential Services
  • Credit scoring and creditworthiness
  • Insurance risk assessment
  • Emergency services dispatching
6. Law Enforcement
  • Risk assessment of natural persons
  • Polygraphs and similar tools
  • Crime analytics and prediction
7. Migration & Asylum
  • Visa application assessment
  • Asylum claim evaluation
  • Border control and security
8. Justice & Democracy
  • Judicial decision assistance
  • Electoral process influence
  • Legal research and case outcome prediction

High-Risk AI Compliance Requirements

Six core obligations for high-risk AI system providers

Risk Management

Establish and maintain a continuous risk management system throughout the AI lifecycle.

Data Governance

Implement data quality criteria for training, validation, and testing datasets with bias assessment.

Technical Documentation

Maintain comprehensive technical documentation demonstrating compliance before market placement.

Record Keeping

Enable automatic logging of events for traceability and post-market monitoring.
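A minimal sketch of what automatic event logging could look like, assuming a Python-based system; the field names and function below are illustrative, not mandated by the Act:

```python
import json
import logging

# Log each automated decision as a structured record so it can be
# audited during post-market monitoring. Field names are assumptions.
logger = logging.getLogger("ai_system_events")
logging.basicConfig(level=logging.INFO)

def log_inference_event(model_version: str, input_id: str, decision: str) -> dict:
    """Build and log a structured record of one automated decision."""
    record = {"model": model_version, "input": input_id, "decision": decision}
    logger.info(json.dumps(record))
    return record

log_inference_event("v2.1.0", "req-48213", "approved")
```

In a production system these records would typically go to tamper-evident, retention-managed storage rather than a plain log file.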

Transparency

Provide clear instructions for use and information about system capabilities and limitations.

Human Oversight

Design systems to be effectively overseen by natural persons during use.

Penalties for Non-Compliance

Substantial fines for violations of the EU AI Act

Fines are set as the higher of a fixed amount and a share of worldwide annual turnover:

• €35M or 7% of global annual turnover: prohibited AI practices
• €15M or 3% of global annual turnover: high-risk AI non-compliance
• €7.5M or 1.5% of global annual turnover: supplying incorrect, incomplete, or misleading information
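Each cap applies as the higher of the fixed amount and the percentage of worldwide annual turnover. A minimal Python sketch of that arithmetic, using the prohibited-practice tier as an example:

```python
def max_fine(fixed_eur: float, pct_of_turnover: float, annual_turnover_eur: float) -> float:
    """Return the upper bound of the fine: the higher of the fixed
    amount and the percentage of worldwide annual turnover."""
    return max(fixed_eur, pct_of_turnover * annual_turnover_eur)

# Prohibited-practice tier: EUR 35M or 7% of turnover, whichever is higher.
# For a company with EUR 1bn turnover, 7% (EUR 70M) exceeds the fixed EUR 35M.
print(max_fine(35e6, 0.07, 1e9))  # 70000000.0
```

For a smaller company with, say, €100M turnover, 7% is only €7M, so the €35M fixed amount would be the cap instead.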

Need Help with EU AI Act Compliance?

Governum provides a comprehensive compliance platform to help you navigate the EU AI Act requirements.