The EU AI Act is the world's first comprehensive AI regulatory framework and applies to all organisations that deploy or offer AI systems in the EU market — regardless of company location. High-risk AI requirements take effect from August 2026: training data governance, model versioning, technical documentation, audit logging and human oversight mechanisms must be implemented and demonstrable. This article shows what cloud infrastructure is required and which AWS services technically address EU AI Act requirements.
Risk Classes of the EU AI Act: Where Does Your System Sit?
The EU AI Act (Regulation EU 2024/1689) classifies AI systems by their risk potential. Classification determines the compliance requirements:
- Unacceptable risk — prohibited AI practices (since February 2025)
- Biometric categorisation based on sensitive characteristics, social scoring, manipulative AI, real-time remote biometric identification in publicly accessible spaces (with narrow exceptions). These practices are now prohibited.
- High risk — high-risk AI systems (from August 2026)
- AI in critical infrastructure, educational assessment, credit scoring, employment decisions, law enforcement, migration, medical devices. Comprehensive requirements apply: conformity assessment, CE marking, technical documentation, EU database registration.
- Limited risk — transparency obligations
- Chatbots, deepfakes, emotion recognition — users must be informed that they are interacting with AI, and AI-generated content must be labelled as such. No conformity assessment required.
- Minimal risk — no specific requirements
- AI-based spam filters and recommendation systems that fall into no higher risk class — only general product safety rules apply.
Key Deadlines
| Date | What applies | Affected parties |
|---|---|---|
| August 2024 | EU AI Act enters into force | All |
| February 2025 | Prohibited AI practices applicable | All deploying AI in the EU market |
| August 2025 | GPAI model obligations (transparency, copyright) | Providers of large AI models |
| August 2026 | High-risk AI requirements fully applicable | Operators and providers of high-risk AI |
| August 2027 | AI embedded in regulated products (medical devices, machinery) | Manufacturers of regulated products |
Requirement 1: Training Data Governance on AWS
Article 10 of the EU AI Act requires strict governance over training, validation and test datasets for high-risk AI systems. Requirements: documentation of data provenance, relevance assessment, bias analysis, data protection compliance.
- Amazon S3 with Object Lock: Store training datasets immutably in S3 (Compliance mode). This makes it verifiable at any time exactly which data was used to train which model version — a prerequisite for the technical documentation required by the AI Act.
- AWS Glue Data Catalog: Centrally catalogue metadata for all datasets: provenance, format, size, timestamp, privacy classification. Automatic data lineage tracking via AWS Glue ETL jobs.
- Amazon Macie: Automatic detection of personal data in S3 training datasets. Prevents GDPR-relevant data from entering training pipelines unnoticed.
- AWS Lake Formation: Fine-grained access controls at dataset level. Only authorised processes may read training data — fully auditable via CloudTrail.
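The Object Lock approach above can be sketched with boto3. A minimal sketch, assuming a hypothetical bucket (`training-data-archive`) that was created with Object Lock enabled and a hypothetical dataset key; the helper only builds the `put_object` parameters, and the commented-out call shows where the actual upload would happen:

```python
from datetime import datetime, timezone

def immutable_dataset_put(bucket: str, key: str, body: bytes,
                          retain_until: datetime) -> dict:
    """Build put_object kwargs that store a training dataset under
    S3 Object Lock Compliance mode: the object version cannot be
    overwritten or deleted by any user until retain_until."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": retain_until,
    }

kwargs = immutable_dataset_put(
    bucket="training-data-archive",         # hypothetical bucket, Object Lock enabled
    key="credit-scoring/v3/train.parquet",  # hypothetical dataset key
    body=b"<parquet bytes>",
    retain_until=datetime(2036, 8, 2, tzinfo=timezone.utc),
)
# import boto3
# boto3.client("s3").put_object(**kwargs)  # requires AWS credentials
```

Note that Object Lock must be enabled when the bucket is created; it cannot be switched on for an existing bucket without AWS support involvement.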
Requirement 2: Model Versioning and Reproducibility
Article 12 of the EU AI Act requires that high-risk AI systems automatically generate logs and that information necessary for conformity assessment is reproducible. In practice: model versions must be immutably archived and linked to their training data.
The Amazon SageMaker Model Registry is the central AWS service for model governance:
- Model Registry
- All model versions are registered with metadata (training job ID, dataset ARN, hyperparameters, evaluation metrics). Each version is immutable — registered models cannot be overwritten. Approval status transitions (e.g. PendingManualApproval → Approved) are fully audited.
- SageMaker Experiments
- Tracking of all training runs with complete parameterisation, metrics and artefacts. Each experiment run is linked to the code used (via CodeCommit/GitHub versioning) and dataset — full reproducibility.
- SageMaker Pipelines
- Repeatable, versioned ML pipelines for data preprocessing, training, evaluation and deployment. Each pipeline execution is an audited artefact — ideal for technical documentation under AI Act Article 11.
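Registering a model version with its lineage metadata can be sketched as follows — a minimal example using hypothetical names (the model package group, image URI, S3 paths and ARNs are placeholders). As above, only the request parameters for `create_model_package` are built:

```python
def model_package_request(group: str, image_uri: str, model_data_url: str,
                          training_job: str, dataset_arn: str) -> dict:
    """Build create_model_package kwargs that register an immutable
    model version in the SageMaker Model Registry, linked to its
    training job and dataset via customer metadata."""
    return {
        "ModelPackageGroupName": group,
        "ModelApprovalStatus": "PendingManualApproval",  # gate deployment on human approval
        "InferenceSpecification": {
            "Containers": [{"Image": image_uri, "ModelDataUrl": model_data_url}],
            "SupportedContentTypes": ["text/csv"],
            "SupportedResponseMIMETypes": ["text/csv"],
        },
        "CustomerMetadataProperties": {  # lineage for the technical documentation
            "TrainingJobName": training_job,
            "DatasetArn": dataset_arn,
        },
    }

req = model_package_request(
    group="credit-scoring-models",                              # hypothetical group
    image_uri="1234.dkr.ecr.eu-central-1.amazonaws.com/xgb:1",  # hypothetical image
    model_data_url="s3://training-data-archive/models/v3/model.tar.gz",
    training_job="credit-scoring-train-2026-03-01",
    dataset_arn="arn:aws:s3:::training-data-archive/credit-scoring/v3",
)
# import boto3
# boto3.client("sagemaker").create_model_package(**req)  # requires AWS credentials
```

Starting every version in PendingManualApproval means no model reaches production without a recorded human sign-off — the same audit trail the conformity assessment asks for.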
Requirement 3: Audit Logging for AI Systems
| AI Act Requirement | AWS Service | Implementation |
|---|---|---|
| Inference request logging | Amazon SageMaker Model Monitor | Captures all input and output data from endpoints — configurable, immutably stored |
| API audit trail | AWS CloudTrail | All API calls to SageMaker, Bedrock, Rekognition logged — immutably archived in S3 |
| In-production anomaly detection | Amazon SageMaker Model Monitor | Data drift, model quality drift, bias drift — automatic alerts on deviations |
| Document human oversight | Amazon Augmented AI (A2I) | Human review workflows for edge cases — reviewer decisions fully audited |
| Access logs | AWS IAM + CloudTrail | Who accessed which model when — immutably stored |
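The inference request logging in the first table row is switched on through the endpoint's data capture configuration. A sketch with hypothetical bucket, endpoint and model names; the builder returns the `DataCaptureConfig` block that `create_endpoint_config` expects:

```python
def data_capture_config(s3_uri: str, sampling_pct: int = 100) -> dict:
    """Build the DataCaptureConfig block for create_endpoint_config:
    capture request and response payloads of inference calls and
    store them in S3 for Model Monitor and audits."""
    return {
        "EnableCapture": True,
        "InitialSamplingPercentage": sampling_pct,  # 100% for high-risk systems
        "DestinationS3Uri": s3_uri,
        "CaptureOptions": [
            {"CaptureMode": "Input"},   # log every request payload
            {"CaptureMode": "Output"},  # log every model response
        ],
    }

endpoint_config = {
    "EndpointConfigName": "credit-scoring-prod",  # hypothetical endpoint config
    "ProductionVariants": [{
        "VariantName": "primary",
        "ModelName": "credit-scoring-v3",         # hypothetical model name
        "InstanceType": "ml.m5.xlarge",
        "InitialInstanceCount": 1,
    }],
    "DataCaptureConfig": data_capture_config("s3://inference-audit-logs/credit-scoring"),
}
# import boto3
# boto3.client("sagemaker").create_endpoint_config(**endpoint_config)
```

Pointing `DestinationS3Uri` at an Object Lock-enabled bucket combines this with Requirement 1: captured inference logs become immutable evidence.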
Requirement 4: Implementing Human Oversight
Article 14 of the EU AI Act is particularly relevant in practice: high-risk AI systems must be designed so that human oversight is possible. Systems may not make fully automated decisions with serious consequences.
Amazon Augmented AI (A2I) is the AWS service specifically for human review workflows. Configurable with three conditions:
- Always require human review: Every AI decision is presented to a human for confirmation — maximum oversight, relevant for the highest risk categories.
- Review on low confidence: Human review is triggered only when the model's confidence falls below a defined threshold — balancing efficiency and oversight.
- Spot checks: A random sample of decisions is routed to human review for quality assurance — documented for audits.
All A2I decisions are archived in S3 and retrievable via AWS Audit Manager. This creates proof that human oversight was actually exercised — not just technically possible.
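The three activation conditions above can be sketched as plain routing logic in front of an A2I human loop. The thresholds, loop name and flow definition ARN are hypothetical, and the `start_human_loop` call itself is left commented out:

```python
import random

def needs_human_review(confidence: float, mode: str,
                       threshold: float = 0.85, sample_rate: float = 0.05) -> bool:
    """Decide whether a prediction must go to a human reviewer,
    mirroring the three A2I activation conditions."""
    if mode == "always":            # every decision is reviewed
        return True
    if mode == "low_confidence":    # review only uncertain predictions
        return confidence < threshold
    if mode == "spot_check":        # random sample for quality assurance
        return random.random() < sample_rate
    raise ValueError(f"unknown mode: {mode}")

if needs_human_review(confidence=0.62, mode="low_confidence"):
    pass
    # boto3.client("sagemaker-a2i-runtime").start_human_loop(
    #     HumanLoopName="credit-scoring-review-0001",   # hypothetical loop name
    #     FlowDefinitionArn=flow_definition_arn,        # hypothetical, created beforehand
    #     HumanLoopInput={"InputContent": "..."},
    # )
```

For audits, the important part is that the routing decision itself is logged alongside the reviewer's verdict: it proves the oversight policy was applied, not merely configured.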
Frequently Asked Questions About the EU AI Act
- What is a high-risk AI system under the EU AI Act?
- High-risk AI systems are AI applications in areas with significant impact on safety or fundamental rights: critical infrastructure, education assessment, credit scoring, law enforcement, migration. The strictest requirements apply: risk management, data quality, logging, human oversight and conformity assessment.
- Does the EU AI Act apply to GPAI models like GPT or Claude?
- Yes. All GPAI models are subject to transparency and copyright obligations; models trained with more than 10^25 FLOPs of compute are presumed to pose systemic risk and face additional safety obligations. Deployers building on GPAI APIs may have further obligations when using them in high-risk applications.
- What deadlines apply for the EU AI Act?
- Prohibited practices have applied since February 2025. GPAI model obligations since August 2025. High-risk AI requirements from August 2026. For AI in regulated products, a transition period applies until 2027.
- Must I register my AI system in an EU database?
- High-risk AI systems under Annex III (with some exceptions) must be registered in the EU AI database. Systems used exclusively internally and non-commercially have reduced requirements.
Storm Reply: AI Act Compliance on AWS
Storm Reply is AWS Premier Consulting Partner and AWS Generative AI Competency Launch Partner 2024. We support organisations with AI Act compliance: from risk classification of existing AI systems through building a compliant MLOps infrastructure to technical documentation for conformity assessments. Storm Reply — AWS Premier Consulting Partner DACH — translates regulatory AI requirements into production-ready AWS architectures.
Ready to build AI Act-compliant infrastructure?
Storm Reply classifies your AI systems and implements the necessary AWS infrastructure for EU AI Act compliance.
Get in touch