Background
The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive regulatory framework for artificial intelligence. It entered into force on 1 August 2024, with a phased rollout:
- February 2025: ban on prohibited AI practices (social scoring, subliminal manipulation, mass biometric surveillance)
- August 2025: obligations for general-purpose AI models (GPAI) and governance structures
- August 2026: main obligations for high-risk AI systems (Annex III)
- August 2027: obligations for AI in regulated products (Annex I)
Penalties are among the highest in the European regulatory landscape: up to 7% of global annual turnover for prohibited practices, 3% for non-compliant high-risk systems, and 1% for supplying false information to authorities.
What companies need to do
1. AI system inventory
The first step is identifying every system that uses AI, including those provided by third parties (SaaS products, APIs, embedded models). According to Deloitte (2024), fewer than 25% of organisations have completed this inventory.
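As an illustration, an inventory entry can be modelled as a simple record that captures who provides the system and how it is integrated. The field names and example systems below are hypothetical, not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (illustrative fields)."""
    name: str
    provider: str            # internal team or third-party vendor
    integration: str         # e.g. "SaaS", "API", "embedded model"
    purpose: str
    risk_class: str = "unclassified"  # filled in during step 2

inventory = [
    AISystemRecord("cv-screening", "VendorX", "SaaS", "HR shortlisting"),
    AISystemRecord("support-bot", "internal", "API", "customer chat"),
]

# Third-party systems are the easiest to miss; flag them for contract review.
third_party = [s for s in inventory if s.provider != "internal"]
```

Keeping third-party systems in the same inventory as in-house ones matters because the Act's obligations can apply to deployers, not only to the original providers.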
2. Risk classification
Each system must be categorised on the scale defined by the regulation:
- Prohibited: social scoring, manipulation, mass biometric surveillance
- High risk: AI systems in healthcare, credit scoring, HR, justice, critical infrastructure
- Limited risk: chatbots and systems with transparency obligations
- Minimal risk: most AI systems — no specific obligations
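The four-tier scale above can be sketched as a lookup from use case to tier. The mapping below is a simplified illustration drawn from the examples in this section; a real classification requires case-by-case legal analysis against Annex III, not a dictionary:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative examples only; not a legal determination.
TIER_BY_USE_CASE = {
    "social_scoring": RiskTier.PROHIBITED,
    "credit_scoring": RiskTier.HIGH,
    "hr_screening": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Falling back to MINIMAL is a convenience for this sketch;
    # in practice an unknown use case should trigger a documented review.
    return TIER_BY_USE_CASE.get(use_case, RiskTier.MINIMAL)
```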
3. Conformity assessment
For high-risk systems, organisations must prepare:
- Complete technical documentation
- Risk management system
- Training data quality requirements
- Transparency and human oversight
- Accuracy, robustness and cybersecurity
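A minimal way to track the five obligations above per system is a gap check: record which requirements are evidenced and list what remains open. This is a hypothetical sketch of such a tracker, not a conformity-assessment tool:

```python
# The five high-risk obligations listed above, as checklist keys.
REQUIREMENTS = [
    "technical_documentation",
    "risk_management_system",
    "training_data_quality",
    "transparency_and_human_oversight",
    "accuracy_robustness_cybersecurity",
]

def gaps(status: dict) -> list:
    """Return the obligations not yet evidenced for a given system."""
    return [r for r in REQUIREMENTS if not status.get(r, False)]

status = {"technical_documentation": True, "risk_management_system": True}
open_items = gaps(status)  # three obligations remain open
```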
4. Internal governance
The organisation must define:
- Internal policies on AI use
- Roles and responsibilities (who oversees AI systems)
- Staff training
- Post-deployment monitoring procedures
Overlaps with GDPR and NIS2
The AI Act does not exist in isolation. It overlaps with:
- GDPR: protection of personal data used for training, Data Protection Impact Assessment (DPIA), data subject rights
- NIS2: cybersecurity of AI systems in critical infrastructure, incident reporting, risk management
McKinsey (2024) estimates that managing the three regulations in separate silos costs 40-60% more than an integrated approach; with a unified compliance platform, the incremental cost drops to 10-15%.
Key deadlines
| Deadline | Obligation |
|---|---|
| Feb 2025 | Prohibited AI practices |
| Aug 2025 | GPAI + governance |
| Aug 2026 | High-risk systems (Annex III) |
| Aug 2027 | Regulated products (Annex I) |
Sources: Regulation (EU) 2024/1689 (EU AI Act), PwC EU AI Act Survey 2024, Deloitte State of AI 2024, McKinsey “The compliance convergence” 2024.