EU AI Act: What Every DPO Needs to Know in 2026

A comprehensive guide to DPO obligations under the EU AI Act — implementation, transparency requirements, and risk assessment.

What Is the EU AI Act and When Does It Apply?

Regulation (EU) 2024/1689 of the European Parliament and of the Council, known as the EU AI Act, entered into force on 1 August 2024. It is the world's first comprehensive law regulating AI systems and takes a risk-based approach, classifying AI systems into four levels: prohibited (unacceptable risk), high-risk, limited risk, and minimal risk.

Most provisions became applicable on 2 August 2026; the prohibitions of Art. 5 have applied since 2 February 2025, and the rules for general-purpose AI models since 2 August 2025. For Data Protection Officers (DPOs), the EU AI Act is not merely an IT topic: it has direct implications for data management processes, risk assessment, and documentation.

The DPO's Role in EU AI Act Compliance

Although the EU AI Act does not explicitly name DPOs as responsible for AI compliance, in practice the DPO's natural responsibilities overlap significantly with the new regulatory requirements. The deployer obligations in Articles 26, 27, and 49 in particular require strong support from the data protection team:

  • Impact assessment: Art. 27 requires certain deployers (public bodies and some private providers of public services, among others) to carry out a fundamental rights impact assessment before deploying a high-risk AI system, analogous to a DPIA under the GDPR.
  • AI system registry: Art. 49 requires registration of high-risk AI systems in the EU database; this is primarily a provider obligation, but deployers that are public authorities must also register their use. The DPO should oversee this process where the organization is in scope.
  • Coordination with the supervisory authority: For AI systems processing personal data, the DPO is the natural liaison between the national DPA and the internal AI team.

AI System Inventory: How to Identify AI Tools in Your Organization

Every DPO's first step should be mapping the AI systems used in their organization. A practical approach involves three stages:

  1. Departmental survey: Send a structured questionnaire to each department asking about tools containing AI components (chatbots, recruitment systems, credit risk tools, customer scoring).
  2. Procurement and IT audit: Review SaaS contracts and licenses — many modern business tools (CRM, HR, marketing automation) contain AI components that may not be obvious at the business level.
  3. Risk classification: For each identified AI system, determine the risk category under Annex III of the EU AI Act (high-risk systems) and Article 5 (prohibited systems).
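The three stages above can be sketched as a simple triage script. The data structure and field names below are this guide's assumptions, not terms defined by the Act, and the output is only a first pass before legal review:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative inventory record; field names are assumptions for this sketch.
@dataclass
class AISystem:
    name: str
    department: str
    vendor: str
    annex_iii_area: Optional[str] = None  # e.g. "employment", "credit scoring"
    prohibited_practice: bool = False     # result of an Art. 5 screening

def triage(system: AISystem) -> str:
    """First-pass risk triage; borderline cases still need legal review."""
    if system.prohibited_practice:
        return "prohibited (Art. 5)"
    if system.annex_iii_area:
        return "high-risk (Annex III)"
    return "limited/minimal risk - check Art. 50 transparency duties"

inventory = [
    AISystem("CV screening tool", "HR", "AcmeHR", annex_iii_area="employment"),
    AISystem("FAQ chatbot", "Customer Support", "BotCo"),
]
for s in inventory:
    print(f"{s.name}: {triage(s)}")
```

A spreadsheet works just as well for small organizations; the point is that every system gets an explicit, recorded classification.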

Transparency Requirements (Art. 50): Chatbots and AI-Generated Content

Article 50 of the EU AI Act introduces direct obligations for operators of AI systems interacting with people. The most important requirements are:

  • AI systems conducting conversations (chatbots) must inform users they are communicating with an AI — unless this is obvious from the context.
  • Synthetically generated content (deepfake audio/video) must be labeled as AI-generated.
  • Emotion recognition and biometric categorisation systems must inform the people exposed to them that the system is in operation.

For DPOs, it is critical to verify that the organization's website and marketing communications comply with these requirements — including customer service chatbots, automated email responses, and generated content.
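A first-pass disclosure check can be automated. The phrases below are examples only; Art. 50 does not prescribe exact wording, so the patterns should be tuned to the notices the organization actually uses:

```python
import re

# Example phrases only; adjust to your organization's actual notice wording.
DISCLOSURE_PATTERNS = [
    r"you are (chatting|talking) with an ai",
    r"powered by (an )?ai",
    r"ai[- ]generated",
]

def has_ai_disclosure(page_html: str) -> bool:
    """Rough first-pass check that a page carries an AI-interaction notice."""
    text = re.sub(r"<[^>]+>", " ", page_html).lower()
    return any(re.search(p, text) for p in DISCLOSURE_PATTERNS)

print(has_ai_disclosure("<div>Hi! You are chatting with an AI assistant.</div>"))  # True
print(has_ai_disclosure("<div>Welcome to our store.</div>"))                       # False
```

A keyword check like this flags candidates for review; it cannot judge whether a notice is clear enough in context, so a human still makes the final call.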

High-Risk AI Systems: What DPOs Must Document

If the organization deploys or uses high-risk AI systems (Annex III), the DPO should oversee the following documentation:

  • Instructions for use: The provider is required to supply detailed technical documentation. The DPO should verify its completeness.
  • Logs and records: Art. 12 requires high-risk AI systems to log events automatically; Art. 26(6) obliges deployers to keep these logs for at least six months, so the DPO should align the data retention policy accordingly.
  • Post-market monitoring results: Art. 72 obliges providers to actively monitor AI system performance after deployment. Where the organization receives or produces such reports, the DPO should integrate them into the record of processing activities.
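The retention check is simple arithmetic and worth automating. This sketch assumes a six-month minimum approximated as 183 days, which should be aligned with the organization's own retention policy and any longer statutory periods:

```python
from datetime import datetime, timedelta, timezone

# Six-month minimum approximated as 183 days; an assumption for this sketch.
MIN_RETENTION = timedelta(days=183)

def retention_shortfall(oldest_log: datetime, now: datetime) -> timedelta:
    """Return how much log history is still missing versus the minimum."""
    held = now - oldest_log
    return max(MIN_RETENTION - held, timedelta(0))

now = datetime(2026, 9, 1, tzinfo=timezone.utc)
print(retention_shortfall(datetime(2026, 8, 1, tzinfo=timezone.utc), now))  # 152 days, 0:00:00
print(retention_shortfall(datetime(2025, 1, 1, tzinfo=timezone.utc), now))  # 0:00:00
```

Running this against the oldest available log entry per system quickly surfaces systems whose logging was enabled too recently or purged too aggressively.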

DPIA for AI Systems under GDPR Article 35

AI systems processing personal data will usually require a Data Protection Impact Assessment (DPIA) under Art. 35 GDPR. The Art. 35(3) criteria that make a DPIA mandatory include:

  • Systematic and extensive evaluation of personal aspects based on profiling.
  • Large-scale processing of special categories of personal data or data relating to criminal convictions.
  • Systematic monitoring of publicly accessible areas.

In practice, any AI system classified as high-risk by the EU AI Act that processes personal data almost automatically meets the DPIA criteria. The DPO should develop a DPIA template tailored to AI systems, including analysis of algorithmic bias and model accuracy assessment.
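The screening step can be encoded so every inventoried system gets a recorded answer. The criteria labels below are simplified summaries of Art. 35(3); treating an EU AI Act high-risk classification as a trigger is this guide's recommendation, and the final call belongs to the DPO and legal counsel:

```python
# Simplified Art. 35(3) screen; descriptions are paraphrases, not legal text.
CRITERIA = {
    "profiling": "systematic and extensive evaluation based on profiling (Art. 35(3)(a))",
    "special_categories_large_scale": "large-scale processing of sensitive data (Art. 35(3)(b))",
    "public_monitoring": "systematic monitoring of publicly accessible areas (Art. 35(3)(c))",
    "ai_act_high_risk": "high-risk classification under the EU AI Act (Annex III)",
}

def dpia_triggers(flags: dict) -> list:
    """Return the human-readable reasons why a DPIA is required."""
    return [desc for key, desc in CRITERIA.items() if flags.get(key)]

triggers = dpia_triggers({"profiling": True, "ai_act_high_risk": True})
print(f"DPIA required: {bool(triggers)}")  # DPIA required: True
```

Recording the matched reasons, not just a yes/no, makes the accountability trail far easier to defend before a supervisory authority.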

DPO Practical Action Plan — 5 Steps

  1. Step 1 — AI Mapping (months 1–2): Conduct a full inventory of AI systems in the organization. Use structured surveys and analyze vendor contracts.
  2. Step 2 — Risk Classification (months 2–3): Determine the risk category for each system under the EU AI Act. Consult a lawyer for borderline cases.
  3. Step 3 — DPIA and Impact Assessment (months 3–4): Conduct DPIAs for high-risk systems. Address AI-specific concerns: bias, explainability, data poisoning.
  4. Step 4 — Policy Updates (months 4–5): Update privacy policy, terms of service, AI policy, and staff notification procedures.
  5. Step 5 — Ongoing Monitoring: Establish regular AI system review procedures — recommended every 6 months or at any significant model change.
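The Step 5 cadence can be generated rather than tracked by hand. The six-month interval is this guide's recommendation, not a statutory deadline, so the parameter is adjustable:

```python
from datetime import date, timedelta

# Six-month cadence approximated as 182 days; an assumption for this sketch.
def review_schedule(start: date, reviews: int, interval_days: int = 182) -> list:
    """Generate the next N review dates after the initial assessment."""
    return [start + timedelta(days=interval_days * i) for i in range(1, reviews + 1)]

for d in review_schedule(date(2026, 9, 1), 2):
    print(d.isoformat())
```

Remember that a significant model change resets the clock: the next review should happen at the change, not at the scheduled date.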

How Automated Scanning Supports the DPO

One practical challenge for DPOs is verifying that visible elements of the organization's website comply with EU AI Act requirements — including chatbot disclosures, information clauses, and AI policies. Automated compliance scanning tools such as Juralex Audit can detect missing or incorrect elements on a website without the need for manual review of every sub-page. This is particularly valuable for DPOs managing multiple domains simultaneously — for example in corporate groups or companies with extensive subdomain networks.
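The core of such a scanner can be sketched in a few lines. The two rules below are hypothetical examples for illustration; a commercial tool such as Juralex Audit applies a far more extensive rule set, and real use would also need a crawler to fetch the pages:

```python
import re

# Hypothetical rule set for illustration only.
CHECKS = {
    "chatbot_disclosure": re.compile(r"chatting with an ai|virtual (ai )?assistant", re.I),
    "ai_policy_link": re.compile(r"href=['\"][^'\"]*ai[-_]policy", re.I),
}

def scan_pages(pages: dict) -> dict:
    """Map each URL to the list of checks its HTML fails."""
    report = {}
    for url, html in pages.items():
        missing = [name for name, pattern in CHECKS.items() if not pattern.search(html)]
        if missing:
            report[url] = missing
    return report

pages = {
    "https://example.com/": "<a href='/ai-policy'>AI policy</a> You are chatting with an AI.",
    "https://example.com/contact": "<p>Write to us any time.</p>",
}
print(scan_pages(pages))  # {'https://example.com/contact': ['chatbot_disclosure', 'ai_policy_link']}
```

Even this toy version shows the value of the approach: the report lists only the pages that fail, which scales to domains with thousands of sub-pages.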