Civil liability regime for artificial intelligence  
2020/2014(INL) - 20/10/2020  

The European Parliament adopted, by 626 votes to 25 with 40 abstentions, a resolution containing recommendations to the Commission on a civil liability regime for artificial intelligence (AI).

Parliament called on the Commission to propose a regulation laying down rules for the civil liability claims of natural and legal persons against operators of AI-systems.

Liability and artificial intelligence

For more than 30 years, the Product Liability Directive has proven an effective tool for obtaining compensation for damage caused by a defective product. It should nevertheless be revised to adapt it to the digital world and to the challenges posed by emerging digital technologies.

Members considered it necessary to ensure maximum legal certainty throughout the liability chain, including for the producer, the operator, injured parties and any other third parties, in order to respond to the new legal challenges created by developments in artificial intelligence (AI) systems. Civil liability rules for AI should strike a balance between protecting citizens and supporting technological innovation.

Scope of application

The requested proposal for a Regulation should apply within the territory of the Union where a physical or virtual activity, device or process driven by an AI-system has caused harm or damage to the life, health or physical integrity of a natural person or to the property of a natural or legal person, or has caused significant immaterial harm resulting in a verifiable economic loss.

Parliament considered that operator liability rules should apply to all types of AI system operations, regardless of the location of the operation and whether it is of a physical or virtual nature.

Objective liability for high-risk AI systems

Under the requested proposal, the operator of a high-risk AI-system should be strictly liable for any harm or damage caused by a physical or virtual activity, device or process driven by that AI-system. The operator should not be able to exonerate itself from liability by claiming that it acted with due diligence.

Although high-risk AI technologies are still rare, operators of high-risk AI-systems should take out liability insurance similar to that required for motor vehicles. The mandatory insurance regime for high-risk AI-systems should cover the amounts and the extent of compensation provided for. Uncertainty regarding risks should not make insurance premiums prohibitively high and thereby become an obstacle to research and innovation.

Under the requested regulation, an operator of a high-risk AI-system that has been held liable for harm or damage under this Regulation should compensate:

- up to a maximum amount of EUR two million in the event of the death of, or in the event of harm caused to the health or physical integrity of, an affected person, resulting from an operation of a high-risk AI-system;

- up to a maximum amount of EUR one million in the event of significant immaterial harm that results in a verifiable economic loss or of damage caused to property.

Civil liability claims based on harm to life, health or physical integrity should be subject to a special limitation period of 30 years from the date on which the harm occurred. This period would be 10 years from the date on which the property damage occurred or the verifiable economic loss resulting from the significant immaterial harm arose.

Fault-based liability for other AI-systems

The operator of an AI-system that does not constitute a high-risk AI-system should be subject to fault-based liability for any harm or damage caused by a physical or virtual activity, device or process driven by the AI-system. The operator should not be liable if it can prove that the harm or damage was caused without its fault.

Monitoring developments

The Commission is called on to work closely with the insurance market to develop innovative insurance products that can fill the insurance gap. Any future changes to the requested Regulation should go hand in hand with a revision of the Product Liability Directive, so that the two instruments are revised in a comprehensive and coherent manner and the rights and obligations of all parties throughout the liability chain are ensured.

Parliament recommended that an exhaustive list of all high-risk AI-systems be set out in an annex to the proposed Regulation. In view of rapid technological developments, the Commission should review this annex at least every six months and, if necessary, amend it by means of a delegated act.