The EU AI Act · What You Need to Know

The EU AI Act. What it is, who it affects, what it requires.

A plain-language guide to the regulation reshaping how European organizations deploy AI.

Status: In force since August 2024
Full application: August 2026 (delay under negotiation)
Scope: All 27 EU member states
Coverage: Providers and deployers
01 / 06

The first comprehensive legal framework for AI.

The EU AI Act was adopted in 2024 and entered into force on 1 August 2024. It is the first comprehensive legal framework for artificial intelligence anywhere in the world. The regulation applies across all 27 EU member states and governs how AI systems are developed, deployed, and used. It applies to organizations inside the EU and to organizations outside the EU whose AI systems affect people within it. The Act follows a risk-based approach: the higher the risk of an AI system, the stricter the obligations on the organization providing or using it.

02 / 06

Four levels of risk. Four levels of obligation.

The Act classifies every AI system into one of four risk tiers. Your obligations depend on which tier applies.

Risk Level | Regulatory Status | Examples
Unacceptable | Banned | Social scoring systems, real-time biometric surveillance in public spaces, manipulative AI exploiting vulnerabilities
High risk | Strict obligations | AI in HR and hiring, credit scoring, education assessment, critical infrastructure, law enforcement, migration control
Limited risk | Transparency obligations | Chatbots, deepfake generators, AI-generated content systems
Minimal risk | No specific obligations | Spam filters, AI-powered games, inventory optimization
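The tiering above is effectively a lookup from use case to obligation level. A minimal sketch of that mapping, assuming paraphrased use-case names (the Act itself defines no machine-readable categories; these identifiers are illustrative only):

```python
# Illustrative lookup from use case to risk tier, paraphrasing the
# table above. Real classification requires legal assessment.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "realtime_public_biometrics": "unacceptable",
    "hiring": "high",
    "credit_scoring": "high",
    "education_assessment": "high",
    "chatbot": "limited",
    "deepfake_generation": "limited",
    "spam_filter": "minimal",
}

OBLIGATIONS = {
    "unacceptable": "banned",
    "high": "strict obligations",
    "limited": "transparency obligations",
    "minimal": "no specific obligations",
}

def obligation_for(use_case: str) -> str:
    """Map a use case to its regulatory status; unknown cases need review."""
    tier = RISK_TIERS.get(use_case)
    if tier is None:
        return "unclassified - seek legal assessment"
    return OBLIGATIONS[tier]
```

The point of the sketch: the tier, not the technology, determines your obligations, so classification is the first compliance step.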
03 / 06

High-risk is where most enterprise AI lives.

The high-risk category is the largest in practice: the European Commission has estimated that 5–15% of AI systems will fall into it. Annex III of the Act lists the specific use cases that qualify as high-risk. If your AI system operates in any of these domains, the full weight of the regulation applies to you.

01

Employment and workforce management

Recruitment, CV screening, performance evaluation, task allocation, promotion or termination decisions.

02

Education and vocational training

Admissions decisions, exam scoring, detecting prohibited behavior, evaluating learning outcomes.

03

Access to essential services

Credit scoring, insurance pricing, eligibility assessment for public benefits, emergency service dispatch.

04

Law enforcement

Predictive policing, polygraph and deception analysis, evidence evaluation, crime analytics.

05

Migration, asylum, border control

Visa eligibility, risk assessment of entrants, document verification, identification of persons.

06

Administration of justice

Legal research assistance, judicial decision support, interpretation of facts or law.

07

Critical infrastructure

Safety components in road traffic, water, gas, heating, electricity supply systems.

08

Biometric identification and categorization

Remote biometric identification, categorization by sensitive attributes, emotion recognition outside narrow exceptions.

04 / 06

Six things every high-risk AI system must have.

If your AI system is classified as high-risk, these six obligations apply. Fines for non-compliance reach €15 million or 3% of global annual turnover, whichever is higher.

01

Risk Management System

Continuous, systematic process to identify, analyze, and mitigate risks throughout the AI system’s lifecycle. Applies before deployment and throughout operation. Must be documented and kept up to date.

02

Data Governance

Training and testing data must be relevant, representative, complete, and free from known errors. Data biases must be examined and mitigated. Applies to personal and non-personal data.

03

Technical Documentation

Comprehensive documentation of the AI system — architecture, data flows, model choices, performance metrics, known limitations. Must be sufficient for regulators to assess compliance.

04

Transparency and Information to Deployers

Clear information about the AI system’s capabilities, limitations, intended purpose, and the level of accuracy and robustness users can expect. Instructions for use must be provided to every deployer.

05

Human Oversight

Natural persons must be able to oversee, interpret, intervene in, and override AI decisions. Training and authority to do so must be provided. System design must enable effective human control.

06

Accuracy, Robustness, Cybersecurity

AI systems must reach an appropriate level of accuracy and robustness. They must be resilient against errors, failures, and attempts to manipulate their outputs. Documentation and monitoring required.
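The fine ceiling for breaching these obligations scales with company size: €15 million or 3% of worldwide annual turnover, whichever is higher (Article 99). A quick sketch of the arithmetic:

```python
def max_fine_eur(worldwide_turnover_eur: float) -> float:
    """Ceiling for high-risk obligation breaches: EUR 15M or 3% of
    worldwide annual turnover, whichever is higher (Art. 99)."""
    return max(15_000_000, 0.03 * worldwide_turnover_eur)

# For a company with EUR 2bn turnover, 3% (EUR 60M) exceeds the flat cap:
print(max_fine_eur(2_000_000_000))  # 60000000.0
```

For any company with turnover above €500 million, the percentage cap dominates, which is why the exposure is material for large enterprises.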

05 / 06

Timeline and current status.

Current as of April 2026. Timeline subject to ongoing Digital Omnibus negotiations.

The Act’s provisions apply in phases. Some obligations are already enforceable. Others take effect progressively. A Digital Omnibus package currently in trilogue negotiation would delay the high-risk obligations deadline. Until the revised deadline is published in the EU Official Journal, organizations should plan against the original August 2026 date. Legal analysts following the trilogue project a political agreement as early as late April 2026, with formal adoption to follow.

01 August 2024
Entry into force

The Act entered the EU legal order. Obligations began phasing in from this date.

02 February 2025
Prohibited practices banned

AI systems with unacceptable risk — social scoring, manipulative AI, untargeted biometric scraping — became illegal.

02 August 2025
General-purpose AI obligations

Providers of GPAI models became subject to transparency, documentation, and copyright compliance obligations.

02 August 2026 (Under negotiation)
High-risk AI obligations (original date)

Full obligations for high-risk AI systems currently scheduled to take effect. Digital Omnibus proposes a delay to December 2027. Until Official Journal publication, this date remains legally operative.

02 August 2027
Full application

Remaining provisions of the Act (including obligations for high-risk AI embedded in products regulated under Annex I) apply in full.

02 December 2027 (Proposed)
High-risk AI obligations (if delay is adopted)

If the Digital Omnibus is adopted, high-risk obligations would apply from this date instead of August 2026.

06 / 06

How DataWeavrs maps to each obligation.

Every EU AI Act obligation for high-risk systems has a corresponding capability in WeavrCore or our delivery methodology. Not bolted on. Built in.

Obligation | DataWeavrs Capability
Risk Management System | Risk assessment is the first deliverable of Step 01 of our methodology (Business Value Workshop).
Data Governance | The 3-tier classification engine (Private / Enrichable / Public) enforces data handling rules at every interaction.
Technical Documentation | WeavrCore generates system documentation — architecture, data flows, model choices — as a byproduct of deployment.
Transparency | Every AI interaction is logged with source attribution and retained per your data retention policy.
Human Oversight | Role-based access controls include human-in-the-loop for high-risk decisions, configurable per use case.
Accuracy & Robustness | RAG with verified source attribution and hallucination monitoring on every response.
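The data-governance idea above, gating every interaction on a document's classification tier, can be sketched generically. This is not WeavrCore's actual API; the tier names come from the table, everything else is a hypothetical illustration:

```python
from enum import Enum

class Tier(Enum):
    PRIVATE = "private"        # never leaves the tenant boundary
    ENRICHABLE = "enrichable"  # may be processed by approved models
    PUBLIC = "public"          # no handling restrictions

# Hypothetical policy table: which tiers an external model may receive.
ALLOWED_FOR_EXTERNAL_MODELS = {Tier.ENRICHABLE, Tier.PUBLIC}

def may_send_to_external_model(tier: Tier) -> bool:
    """Gate every AI interaction on the document's classification tier."""
    return tier in ALLOWED_FOR_EXTERNAL_MODELS
```

Enforcing the rule at the interaction boundary, rather than in per-application logic, is what makes the data-governance obligation auditable.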

Know where you stand in under 10 minutes.

The DataWeavrs EU AI Act Readiness Assessment gives you a scored view of your current exposure, plus a tailored diagnosis of where the biggest gaps sit. No meetings. No sales call. A PDF report in your inbox.

12 questions. Takes under 10 minutes. Your answers are used only to generate your report.