EU AI Act Timeline 2026–2027: What Comes Into Force and When
Detailed EU AI Act implementation timeline: prohibitions from February 2025, GPAI from August 2025, high-risk systems from August 2026. Provider and deployer obligations.
EU AI Act Implementation Timeline: Key Dates
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, but its provisions apply progressively, following a phased schedule that runs from 2025 to 2027. Understanding this timeline is essential for providers and deployers of AI systems, who must plan their compliance well in advance.
Phase 1: February 2025 — Prohibited AI Practices
From 2 February 2025, the prohibitions on AI systems classified as unacceptable under the AI Act became applicable. These cover practices such as:
- Subliminal manipulation: AI systems using subliminal techniques that distort a person's behaviour in a way likely to cause harm.
- Exploitation of vulnerabilities: Systems that exploit vulnerabilities arising from age, disability, or a specific social or economic situation.
- Social scoring: Systems evaluating the trustworthiness of individuals based on social behaviour, leading to unjustified or disproportionate treatment.
- Real-time biometric identification: Remote biometric identification systems in public spaces for law enforcement purposes (with narrow exceptions).
- Emotion inference: Systems inferring emotions of individuals in the workplace and educational institutions (except where intended for medical or safety reasons).
In the same month (February 2025), AI literacy obligations commenced — organisations must ensure their personnel have sufficient knowledge about the AI systems they work with (Art. 4 of the AI Act).
Phase 2: August 2025 — General-Purpose AI (GPAI) Models
From 2 August 2025, the rules governing General-Purpose AI (GPAI) models apply. This concerns providers of foundation models such as large language models (LLMs) and generative AI models.
GPAI model providers must comply with the following obligations:
- Prepare and maintain technical documentation (Art. 53) for provision, on request, to the AI Office and national competent authorities.
- Provide downstream AI system providers with sufficient information and documentation to meet their own compliance obligations.
- Implement a copyright compliance policy (Art. 53(1)(c)) — especially important for models trained on internet data.
- Publish a sufficiently detailed summary of the content used for training (Art. 53(1)(d)).
For GPAI models with systemic risk (training compute above 10²⁵ FLOP or designated by the Commission), additional requirements include: model evaluation, adversarial testing, incident reporting to the Commission, and cybersecurity measures.
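The compute-based presumption of systemic risk reduces to a simple threshold check. A minimal illustrative sketch (the function name and structure are my own, not part of the Act; note that the Commission can also designate a model as systemic-risk regardless of compute, which a pure threshold check cannot capture):

```python
# Illustrative only: the AI Act presumes systemic risk for GPAI models whose
# cumulative training compute exceeds 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flop: float) -> bool:
    """Return True if a GPAI model meets the compute-based presumption
    of systemic risk. Commission designation below the threshold is
    possible and is not modelled here."""
    return training_flop > SYSTEMIC_RISK_FLOP_THRESHOLD

# A model trained with ~5e25 FLOP falls under the presumption;
# one trained with ~3e24 FLOP does not.
print(presumed_systemic_risk(5e25))   # True
print(presumed_systemic_risk(3e24))   # False
```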
The European AI Office, established within the European Commission, became the primary supervisory body for GPAI models from August 2025.
Phase 3: August 2026 — Full Application for High-Risk AI Systems (Annex III)
2 August 2026 is the critical date for most private-sector entities. From this date, the full AI Act rules apply to high-risk AI systems listed in Annex III that are not covered by Annex I (sectorally regulated products).
High-risk AI systems listed in Annex III include:
- Biometric systems (categorisation, emotion recognition).
- Systems managing critical infrastructure (energy, water, transport).
- AI systems in education — student assessment, examinations, admissions.
- AI systems in employment — recruitment, performance evaluation, promotions.
- AI systems in essential public and private services — credit scoring, insurance, healthcare.
- AI systems in law enforcement.
- AI systems in migration and asylum management.
- AI systems assisting the administration of justice.
Obligations for deployers of high-risk systems from August 2026 include:
- Conducting a Fundamental Rights Impact Assessment (FRIA) before first deployment (Art. 27; required for public bodies, private entities providing public services, and certain credit and insurance deployers).
- Registering the use of the system in the EU database (for deployers that are public authorities or bodies, Art. 49(3)).
- Ensuring human oversight of the system during operation.
- Monitoring the system's performance and reporting serious incidents to the supervisory authority.
- Retaining logs for a minimum of six months (or longer as required by law).
- Informing workers and affected individuals about the use of AI (Art. 26(7) and 26(11)).
Phase 4: August 2027 — High-Risk AI Systems in Annex I Products
An extended transition period runs until 2 August 2027 for high-risk AI systems that form part of products regulated by the EU sectoral legislation listed in Annex I of the AI Act. This covers:
- AI systems in medical devices (regulated under MDR/IVDR).
- AI systems in machinery, toys, marine equipment, and aviation products covered by the relevant EU harmonisation legislation.
- AI systems in vehicles and automotive components.
This extended deadline acknowledges that these systems already undergo certification processes under existing sectoral regulations — the AI Act aligns with these established cycles.
Providers vs. Deployers: Different Obligation Scopes
The AI Act distinguishes two main categories of entity with different obligations:
- Providers: Entities that develop or commission the development of an AI system placed on the EU market. They bear the greatest burden — technical documentation, conformity assessment, CE marking (for high-risk systems), database registration.
- Deployers: Entities using an AI system in a professional context. Their obligations are narrower but significant: human oversight, registration (for Annex III systems), FRIA (where the system has a material impact on individuals' rights), and transparency towards employees and affected persons.
Law firms and financial companies that use third-party AI tools (such as AI legal assistants or credit scoring systems) act as deployers — not providers. This does not, however, exempt them from verifying that the tools they use comply with the AI Act on the provider side.
Penalties and Enforcement
The AI Act introduces an administrative penalty system similar to GDPR:
- Up to €35 million or 7% of global annual turnover (whichever is higher) — for violations of the prohibited-practices rules.
- Up to €15 million or 3% of global annual turnover (whichever is higher) — for non-compliance with most other obligations, including those applying to high-risk systems.
- Up to €7.5 million or 1% of global annual turnover (whichever is higher) — for supplying incorrect, incomplete or misleading information to authorities.
For SMEs and start-ups, each fine is capped at the lower of the percentage and the fixed amount (Art. 99(6)). National supervisory authorities must be designated by each Member State by 2 August 2025.
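The interplay between fixed caps and turnover percentages is simple arithmetic. A sketch using the tiers above (an illustration of how the caps combine, not a legal computation):

```python
def max_fine(cap_eur: float, pct: float, turnover_eur: float,
             sme: bool = False) -> float:
    """Upper bound of an AI Act administrative fine: the higher of the
    fixed cap and the turnover percentage for most undertakings, but
    the lower of the two for SMEs and start-ups (Art. 99(6))."""
    pct_amount = turnover_eur * pct
    return min(cap_eur, pct_amount) if sme else max(cap_eur, pct_amount)

# Prohibited-practice tier for a company with €2bn global turnover:
# 7% of 2,000,000,000 = €140m, which exceeds the €35m fixed cap.
print(max_fine(35_000_000, 0.07, 2_000_000_000))            # 140000000.0
# The same violation by an SME is capped at the lower amount.
print(max_fine(35_000_000, 0.07, 2_000_000_000, sme=True))  # 35000000.0
```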
Action Plan: What Organisations Should Do Now
Regardless of which phase is most relevant to your organisation, the following steps are worth taking immediately:
- AI systems inventory: Identify all AI systems used or planned in the organisation — both proprietary systems and third-party SaaS tools.
- Risk classification: Assess whether each identified system falls into the prohibited, high-risk, limited-risk, or minimal-risk category.
- Supplier verification: Request from AI tool providers documentation confirming AI Act compliance (especially for high-risk systems).
- Staff training: Ensure employees have adequate AI literacy (obligation since February 2025) — start with basic training on the AI Act framework and human oversight principles.
- Website review: Verify that the organisation's website contains required information about AI use — AI chatbots must be clearly labelled, and the privacy policy should include a clause on automated decision-making.
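The inventory and classification steps above can be sketched as a simple data structure. A minimal illustration (the categories follow the Act's risk tiers, but the field names and example records are my own shorthand, not terms defined by the regulation):

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"   # Art. 5 practices: must not be used
    HIGH = "high-risk"          # Annex I / Annex III systems
    LIMITED = "limited-risk"    # transparency duties (e.g. chatbots, Art. 50)
    MINIMAL = "minimal-risk"    # no specific AI Act obligations

@dataclass
class AISystemRecord:
    name: str
    vendor: str                 # "internal" for proprietary systems
    role: str                   # "provider" or "deployer"
    category: RiskCategory
    vendor_docs_verified: bool  # supplier compliance documentation on file

inventory = [
    AISystemRecord("CV screening tool", "ExampleVendor", "deployer",
                   RiskCategory.HIGH, vendor_docs_verified=False),
    AISystemRecord("Website chatbot", "internal", "provider",
                   RiskCategory.LIMITED, vendor_docs_verified=True),
]

# Flag high-risk systems whose supplier documentation is still missing.
gaps = [s.name for s in inventory
        if s.category is RiskCategory.HIGH and not s.vendor_docs_verified]
print(gaps)   # ['CV screening tool']
```

Even a spreadsheet with these columns covers the first three action-plan steps; the structure matters more than the tooling.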