The EU AI Act. What it is, who it affects, what it requires.
A plain-language guide to the regulation reshaping how European organizations deploy AI.
The first comprehensive legal framework for AI.
The EU AI Act was adopted in 2024 and entered into force on 1 August 2024. It is the first comprehensive legal framework for artificial intelligence anywhere in the world. The regulation applies across all 27 EU member states and governs how AI systems are developed, deployed, and used. It applies to organizations inside the EU and to organizations outside the EU whose AI systems affect people within it. The Act follows a risk-based approach: the higher the risk of an AI system, the stricter the obligations on the organization providing or using it.
Four levels of risk. Four levels of obligation.
The Act classifies every AI system into one of four risk tiers. Your obligations depend on which tier applies.
| Risk Level | Regulatory Status | Examples |
|---|---|---|
| Unacceptable | Banned | Social scoring systems, real-time biometric surveillance in public spaces, manipulative AI exploiting vulnerabilities |
| High risk | Strict obligations | AI in HR and hiring, credit scoring, education assessment, critical infrastructure, law enforcement, migration control |
| Limited risk | Transparency obligations | Chatbots, deepfake generators, AI-generated content systems |
| Minimal risk | No specific obligations | Spam filters, AI-powered games, inventory optimization |
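The tiering in the table above is essentially a lookup from use case to obligation level. As a purely illustrative sketch (the use-case strings and the mapping are our own simplification, not the Act's legal test, which requires analysis against Article 5 and Annex III):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "strict obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Example use cases from the table above, mapped to their tiers.
# Real classification is a legal determination, not a string match.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "cv screening": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str):
    """Return the risk tier for a known example, or None if the
    use case needs a case-by-case legal assessment."""
    return USE_CASE_TIERS.get(use_case.lower())
```

The important design point the Act encodes: an unknown system is not automatically minimal risk; it simply has not been classified yet.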
High-risk is where most enterprise AI lives.
The high-risk category is the largest in practice. The European Commission estimates that 5–15% of AI systems will fall into it. Annex III of the Act lists the specific use cases that qualify as high-risk. If your AI system operates in any of these domains, the full weight of the regulation applies to you.
Employment and workforce management
Recruitment, CV screening, performance evaluation, task allocation, promotion or termination decisions.
Education and vocational training
Admissions decisions, exam scoring, detecting prohibited behavior, evaluating learning outcomes.
Access to essential services
Credit scoring, insurance pricing, eligibility assessment for public benefits, emergency service dispatch.
Law enforcement
Predictive policing, polygraph and deception analysis, evidence evaluation, crime analytics.
Migration, asylum, border control
Visa eligibility, risk assessment of entrants, document verification, identification of persons.
Administration of justice
Legal research assistance, judicial decision support, interpretation of facts or law.
Critical infrastructure
Safety components in road traffic, water, gas, heating, electricity supply systems.
Biometric identification and categorization
Remote biometric identification, categorization by sensitive attributes, emotion recognition outside narrow exceptions.
Six things every high-risk AI system must have.
If your AI system is classified as high-risk, these six obligations apply. Non-compliance fines reach €15 million or 3% of global annual turnover, whichever is higher.
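The "whichever is higher" rule means the 3% figure dominates for large companies and the €15 million floor dominates for smaller ones. A minimal arithmetic sketch (the function name is ours; this models only the statutory ceiling, not how regulators set an actual fine):

```python
def high_risk_fine_cap(global_turnover_eur: float) -> float:
    """Upper bound of the fine for high-risk non-compliance:
    EUR 15 million or 3% of worldwide annual turnover,
    whichever is higher."""
    return max(15_000_000, 0.03 * global_turnover_eur)

# A company with EUR 2bn turnover faces a EUR 60m ceiling (3% dominates);
# one with EUR 100m turnover faces the EUR 15m floor.
```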
Risk Management System
Continuous, systematic process to identify, analyze, and mitigate risks throughout the AI system’s lifecycle. Applies before deployment and throughout operation. Must be documented and kept up to date.
Data Governance
Training and testing data must be relevant, representative, complete, and free from known errors. Data biases must be examined and mitigated. Applies to personal and non-personal data.
Technical Documentation
Comprehensive documentation of the AI system — architecture, data flows, model choices, performance metrics, known limitations. Must be sufficient for regulators to assess compliance.
Transparency and Information to Deployers
Clear information about the AI system’s capabilities, limitations, intended purpose, and the level of accuracy and robustness users can expect. Instructions for use must be provided to every deployer.
Human Oversight
Natural persons must be able to oversee, interpret, intervene in, and override AI decisions. Training and authority to do so must be provided. System design must enable effective human control.
Accuracy, Robustness, Cybersecurity
AI systems must reach an appropriate level of accuracy and robustness. They must be resilient against errors, failures, and attempts to manipulate their outputs. Documentation and monitoring required.
Timeline and current status.
The Act’s provisions apply in phases. Some obligations are already enforceable; others take effect progressively. A Digital Omnibus package, currently in trilogue negotiation, would delay the high-risk obligations deadline. Until the revised deadline is published in the EU Official Journal, organizations should plan against the original August 2026 date. Legal analysts following the trilogue expect a political agreement as early as late April 2026, with formal adoption to follow.
1 August 2024: The Act entered the EU legal order. Obligations began phasing in from this date.
2 February 2025: AI systems with unacceptable risk — social scoring, manipulative AI, untargeted biometric scraping — became illegal.
2 August 2025: Providers of general-purpose AI (GPAI) models became subject to transparency, documentation, and copyright compliance obligations.
2 August 2026: Full obligations for high-risk AI systems are currently scheduled to take effect. The Digital Omnibus proposes a delay to December 2027; until publication in the Official Journal, this date remains legally operative.
December 2027: If the Digital Omnibus is adopted, high-risk obligations would apply from this date instead of August 2026.
2 August 2027: Remaining provisions of the Act (including additional transparency and governance rules) apply in full.
How DataWeavrs maps to each obligation.
Every EU AI Act obligation for high-risk systems has a corresponding capability in WeavrCore or our delivery methodology. Not bolted on. Built in.
| Obligation | DataWeavrs Capability |
|---|---|
| Risk Management System | Risk assessment is the first deliverable of Step 01 of our methodology (Business Value Workshop). |
| Data Governance | The 3-tier classification engine (Private / Enrichable / Public) enforces data handling rules at every interaction. |
| Technical Documentation | WeavrCore generates system documentation — architecture, data flows, model choices — as a byproduct of deployment. |
| Transparency | Every AI interaction is logged with source attribution and retained per your data retention policy. |
| Human Oversight | Role-based access controls include human-in-the-loop for high-risk decisions, configurable per use case. |
| Accuracy & Robustness | RAG with verified source attribution and hallucination monitoring on every response. |
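To make the data governance row concrete: a three-tier classification scheme like Private / Enrichable / Public amounts to a policy table consulted before every data-handling action. The sketch below is purely illustrative — the tier names come from the table above, but the action names and enforcement function are our assumptions, not WeavrCore's actual API:

```python
from enum import Enum, auto

class DataTier(Enum):
    PRIVATE = auto()     # never leaves your environment
    ENRICHABLE = auto()  # may be combined with external context
    PUBLIC = auto()      # freely usable

# Hypothetical handling rules keyed by tier; the action names
# ("local_inference", etc.) are illustrative placeholders.
ALLOWED_ACTIONS = {
    DataTier.PRIVATE: {"local_inference"},
    DataTier.ENRICHABLE: {"local_inference", "external_enrichment"},
    DataTier.PUBLIC: {"local_inference", "external_enrichment", "publish"},
}

def enforce(tier: DataTier, action: str) -> bool:
    """Return True if the requested action is permitted for this tier."""
    return action in ALLOWED_ACTIONS[tier]
```

The value of this pattern for compliance is that the rule is checked at every interaction rather than relying on users to remember the policy, which maps directly onto the Act's data governance obligation.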
Know where you stand in under 10 minutes.
The DataWeavrs EU AI Act Readiness Assessment gives you a scored view of your current exposure, plus a tailored diagnosis of where the biggest gaps sit. No meetings. No sales call. A PDF report in your inbox.
12 questions. Takes under 10 minutes. Your answers are used only to generate your report.