As part of its digital reform package, the European Commission has proposed a new Directive to harmonise certain national non-contractual civil liability rules as they apply to damage caused by AI systems (the “AI Liability Directive”).
The proposed AI Liability Directive was issued at the end of September 2022 and is closely linked to other pieces of EU legislation, in particular the proposed AI Act (the “AI Act”), to which the AI Liability Directive consistently refers, as well as the Product Liability Directive. An analysis of the proposed AI Liability Directive in the context of product liability law can be found here.
While the AI Act has a preventive scope (i.e., preventing AI-related damage from occurring), the proposed AI Liability Directive has a compensatory scope (i.e., compensating those who have nonetheless suffered damage).
The purpose of the AI Liability Directive is to ensure that persons claiming compensation for damage caused by an AI system enjoy a level of protection similar to that of persons incurring damage caused by other products. The Commission believes a new and specific regulatory framework is required due to the specific characteristics of AI, such as opacity, autonomous behaviour, complexity and limited predictability, all of which challenge the application of existing liability rules.
However, to avoid encroaching upon national civil liability rules any more than necessary, the proposed AI Liability Directive focuses on two specific tools: (i) specific rules on the disclosure of evidence; and (ii) a new rebuttable presumption of causality.
The proposed AI Act qualifies certain AI systems as ‘high-risk’. These are AI systems used in critical infrastructures, safety components of products and certain essential private and public services. The AI Act provides for specific documentation and logging requirements for high-risk AI systems.
The proposed AI Liability Directive builds on these ex ante requirements by providing that national courts must be empowered to order the provider of such a high-risk AI system (as well as certain other persons who may be in possession of relevant information) to disclose relevant evidence (or to preserve information) at its disposal. The disclosure mechanism works as follows: a (potential) claimant must first ask the provider or user of the high-risk AI system suspected of having caused damage to disclose relevant evidence at its disposal; only if that request is refused can a national court be asked to order disclosure, and only where the claimant presents facts and evidence sufficient to support the plausibility of the claim for damages. Disclosure is limited to what is necessary and proportionate to support the claim, and courts must safeguard trade secrets and other confidential information. If the defendant fails to comply with a disclosure or preservation order, the court will presume, subject to rebuttal, that the defendant has not complied with the relevant duty of care.
The definitions of the AI Act, to which the AI Liability Directive refers, are still going through the EU’s legislative process. The narrower definition of the notion of ‘AI system’, currently advocated by some members of the European Parliament, would limit the scope of the AI Liability Directive, whereas a broader, more inclusive definition would lead to wide applicability of the new liability rules.
The second tool employed by the Commission to alleviate the burden of proof for persons harmed by AI systems is a rebuttable presumption of a causal link between: (i) the fault of the defendant; and (ii) the output produced by the AI system (or the failure of the AI system to produce an output).
This presumption is intended to address the difficulty of proving that a specific input for which the potentially liable person is responsible caused a specific AI system output that, in turn, led to the damage. It is therefore limited in scope but still very valuable for claimants in AI-related liability cases.
For the presumption of causality to apply, three conditions must be fulfilled: (i) the claimant has demonstrated (or the court has presumed) the fault of the defendant, consisting in non-compliance with a duty of care intended to protect against the damage that occurred; (ii) it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system (or the failure of the AI system to produce an output); and (iii) the claimant has demonstrated that this output (or failure to produce an output) gave rise to the damage.
The proposed Directive provides for a few additional limitations to the application of this presumption. For example, in the case of a claim for damages concerning a standard (not ‘high-risk’) AI system, the presumption only applies where the national court considers it excessively difficult for the claimant to prove the causal link. With regard to ‘high-risk’ AI systems, on the other hand, the presumption does not apply where the defendant demonstrates that sufficient evidence and expertise are reasonably accessible for the claimant to prove the causal link.
Finally, the presumption of causality is rebuttable. Nonetheless, it will likely be difficult for a defendant to provide evidence sufficient for such a rebuttal. This would require either evidence of a negative fact (i.e., that the breach of a duty of care did not cause the harm suffered) or evidence that the harm is the result of another cause (which will require the defendant to have a full view of the facts).
A few additional elements are notable: