EU AI Liability Directive: What Businesses Using AI on Their Website Face
How the AILD changes liability for AI-caused harm — the causality presumption, right to disclosure of documentation, and the link to the EU AI Act.
The AI Liability Directive — What It Is and Why It Matters
In October 2022, the European Commission published its proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (the AI Liability Directive, or AILD). The Directive complements the EU AI Act by governing civil liability for damage caused by AI systems. Unlike the EU AI Act, which focuses on ex ante requirements (obligations that apply before a system is placed on the market), the AILD addresses ex post claims, i.e. cases where damage has already occurred.
As of early 2026, the directive remains in the legislative process (trilogue between the Parliament, Council and Commission). However, its key mechanisms are well established and organisations should prepare for its entry into force.
The Key Mechanism: Presumption of Causality
The AILD's most significant innovation is the rebuttable presumption of a causal link. This applies where:
- The defendant breached a duty of care under EU or national law — including requirements of the EU AI Act.
- Based on the circumstances of the case, it is likely that the breach influenced the output of the AI system.
- The claimant has shown that the output of the AI system caused the damage.
In plain terms: if your organisation failed to meet EU AI Act requirements — for example, failed to maintain logs or failed to ensure human oversight of a high-risk system — and an AI system caused harm to someone, a court may presume that the failure was the cause of the harm. The burden of rebutting this presumption falls on the defendant.
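The three conditions are cumulative: if any one of them fails, the presumption does not arise. A purely illustrative sketch of that conjunctive structure (function and parameter names are my own, not terms from the Directive):

```python
# Illustrative only: under the AILD the presumption is a judicial
# assessment, not a mechanical test, and it remains rebuttable
# by the defendant even when all three conditions are met.
def causality_presumed(breach_of_duty: bool,
                       breach_likely_influenced_output: bool,
                       output_caused_damage: bool) -> bool:
    # All three conditions must hold; any single failure defeats
    # the presumption.
    return (breach_of_duty
            and breach_likely_influenced_output
            and output_caused_damage)
```

Note that even a clear breach of duty (first condition) is not enough on its own: the claimant must still show the output caused the damage, and the link between breach and output must be plausible on the facts.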
Right to Disclosure of Evidence
The AILD gives an injured party the right to request disclosure of documentation and data relating to the AI system. Courts will be able to order providers and operators of AI systems to disclose:
- Technical documentation of the AI system (corresponding to EU AI Act Art. 11–13 requirements).
- Logs and operational records (where the system generates them).
- Information on training data, to the extent necessary to assess the claim.
- Results of risk assessments and conformity evaluations.
If the defendant refuses to disclose documents without justification, the court may draw a negative inference, treating the withheld documents as supporting the claimant's case.
Who Bears Liability?
The AILD does not replace existing civil liability regimes — it overlays them. Liability may attach to:
- AI system providers: Entities that developed the AI system and placed it on the market. They may be liable under the revised Product Liability Directive (PLD), with the AILD supplementing the PLD in areas the latter does not cover.
- Operators/deployers: Entities using an AI system in a professional context. The AILD expressly governs operator liability — including for harm resulting from improper supervision or deployment of the system.
- Consumer users: Private users of AI systems are generally not defendants under the AILD, but may be claimants where harmed by AI systems operated by other entities.
The EU AI Act Link: Non-Compliance = Liability Risk
The AILD and EU AI Act are designed as complementary instruments. Practical consequences:
- A breach of EU AI Act obligations (no technical documentation, no logs, no human oversight) lays the groundwork for the causality presumption under the AILD, provided the remaining conditions are met.
- High-risk AI systems (EU AI Act Annex III) will face particular scrutiny in civil proceedings — required documentation becomes key evidence.
- The EU AI systems register (Art. 71 EU AI Act) will enable injured parties to identify the system's provider and request disclosure.
In short: EU AI Act compliance is not only a regulatory requirement — it is a legal shield in potential damages proceedings.
What Does This Mean for Your Website?
If your organisation uses AI systems on its website (chatbots, recommendation systems, customer risk assessment tools), the AILD makes the following information and documentation practices legally significant:
- AI transparency on the website: Users must know they are interacting with an AI system (EU AI Act Art. 50 requirement, reinforced by the AILD's causality presumption where transparency is lacking).
- Technical documentation: Every high-risk AI system used on the website must have documented operation, training data, and oversight mechanisms. In a dispute, a court may order its disclosure.
- Logs and audit trail: AI systems should generate operational logs (who, when, with what outcome) enabling events to be reconstructed in case of a claim.
- Privacy policy: Should include information on automated decision-making and profiling, the legal basis (GDPR Art. 22), and the right to an explanation of the decision.
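The logging point above can be made concrete. Here is a minimal sketch of an AI decision audit record; the field names and structure are my assumptions for illustration, not a schema mandated by the AILD or the EU AI Act:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

audit_logger = logging.getLogger("ai_audit")

def log_ai_decision(system_id: str, user_ref: str, input_summary: str,
                    output: str, human_reviewed: bool) -> dict:
    """Record an AI decision so events can be reconstructed later.

    Hypothetical sketch: captures who was affected, when, which
    system produced the output, what the outcome was, and whether
    a human reviewed it (evidence of oversight).
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # which AI system produced the output
        "user_ref": user_ref,            # pseudonymised reference, not raw identity
        "input_summary": input_summary,  # avoid storing raw personal data here
        "output": output,
        "human_reviewed": human_reviewed,
    }
    audit_logger.info(json.dumps(record))
    return record
```

In practice these records would go to append-only, retention-managed storage rather than a plain application log, since they may become evidence under a disclosure order.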
Timeline and Current Status
As of March 2026: the AILD remains in trilogue. The European Parliament adopted its position in December 2023, broadening the scope with additional consumer protection measures. An agreement is expected in 2026. Once adopted, Member States will have two years to implement the directive into national law.
Organisations should begin preparing for the AILD now by:
- Taking an inventory of AI systems used in operations and on the website.
- Ensuring EU AI Act compliance — this minimises AILD risk.
- Implementing logging for AI systems that make decisions affecting users.
- Updating privacy policies with AI and automated decision-making clauses.
- Consulting insurers about extending professional liability policies to cover AI risk.
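The first two preparation steps, inventory and compliance checking, can be combined in a simple internal record. A hypothetical sketch (the fields and gap checks are my own illustration, not a regulatory schema or legal advice):

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an internal inventory of AI systems in use."""
    name: str
    vendor: str
    purpose: str
    high_risk: bool            # falls under EU AI Act Annex III?
    has_technical_docs: bool
    logging_enabled: bool
    human_oversight: bool

    def compliance_gaps(self) -> list[str]:
        # Assumption-based checks reflecting the EU AI Act duties
        # discussed above; a real assessment needs legal review.
        gaps = []
        if self.high_risk and not self.has_technical_docs:
            gaps.append("missing technical documentation")
        if not self.logging_enabled:
            gaps.append("no operational logging")
        if self.high_risk and not self.human_oversight:
            gaps.append("no human oversight")
        return gaps

record = AISystemRecord(
    name="site-chatbot", vendor="ExampleAI", purpose="customer support",
    high_risk=False, has_technical_docs=True,
    logging_enabled=False, human_oversight=True)
print(record.compliance_gaps())
```

Each gap in such an inventory is a potential trigger for the AILD causality presumption, which makes closing them the cheapest form of litigation risk management.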