Do you develop or operate software or AI? Liability for damages falls on you too

Pavel Čech 22.11.2024
 

Software and artificial intelligence in their various forms are increasingly woven into people's everyday activities, both in their work and private lives: from smartphones, to algorithms that recommend the ideal employee, to self-driving cars. With this massive expansion comes the drive to regulate software so that its use brings more benefits than risks. That is why the European Union is introducing new directives to make it clear who will be liable for damage caused by defective products, including software and AI-driven technologies. In this article, we look at who will be affected by the regulation, what obligations it brings, and when we will see the new rules in practice.

Do we need more guidelines?

Software can do things we never dreamed of just a few years ago. But what if intelligent software or AI makes a mistake that harms someone? It could threaten a range of fundamental human rights, including interference with life, health or the right to equal treatment. We can hardly fine an algorithm, and we can hardly demand compensation from software or AI itself. Clearly, regulation is needed.

The EU has prepared two new directives on this matter: the Product Liability Directive and the Artificial Intelligence Liability Directive.

The new rules are designed to help people and businesses find those who are liable while providing some safeguards for businesses that use software and AI.

The Product Liability Directive was approved on 23rd October 2024, and Member States must adopt the legislation necessary to comply with it by 9th December 2026 at the latest. The AI Liability Directive is still going through the EU legislative process, but we can expect it to take effect on a similar timeframe.

The Product Liability Directive will bring clear rules

The new Directive replaces the original 1985 Directive and adapts to developments in modern technology. Its aim is to update the product liability rules so that they work in today's digital age and to provide legal certainty and protection for those claiming damages.

Software and digital manufacturing files are now included in the definition of a product. However, free and open-source software is exempt if it has been developed or supplied outside the scope of commercial activity. This change, together with the exclusion of free software, is intended to ensure that damage caused by technology is compensated without stifling innovation and development built on free software.

The Directive makes it easier for the victim to prove that they have suffered damage as a result of defective software. In particular, it provides effective tools to identify potentially liable parties and to gather relevant evidence for a claim for damages. Courts will be able to order persons sued for damage caused by software to disclose evidence, provided the injured party presents facts and evidence sufficient to support the plausibility of their claim.

If the defendant fails to disclose relevant evidence, the software is automatically presumed to be defective. Likewise, the software is presumed to be defective if the claimant demonstrates that it does not meet mandatory safety requirements set by law, or that the damage was caused by an obvious malfunction of the software during reasonably foreseeable use or under ordinary circumstances.

An automatic presumption of software defectiveness also arises if the claimant has excessive difficulties in proving that the software is defective or if the claimant demonstrates that it is likely that the software is defective.

It also works the other way around: if the defendant needs evidence to rebut the claim for damages, the court can order the claimant to disclose it.

Important: The new Directive will only apply to products placed on the market or put into service after 9th December 2026 and products placed on the market or put into service before that date will continue to be governed by the previous Directive 85/374/EEC.

We closely monitor all European regulations and ensure that our clients' businesses are always DSA, P2B and DAC7 compliant. We can also review your platform.

The Directive will affect many parties

The new rules apply not only to the manufacturer of the defective product or its component, but also to the importer of the defective product, the manufacturer's authorised representative, and the fulfilment service provider. In addition, any person who substantially modifies a product and places it on the market may be liable under the new Directive. Where no liable party established in the Union can be identified, responsibility falls on the distributors of the products or the providers of online platforms, under the conditions provided for by law.

The Directive provides for several exemptions from liability. One of them is proving that the defect arose only after the product was placed on the market. However, this exemption does not apply to damage caused by the software itself, including its updates or, conversely, the lack of updates.

Artificial Intelligence Liability Directive

The second Directive provides specific rules for the disclosure of evidence of damage caused by AI. As with the Product Liability Directive, the court may order disclosure of relevant evidence about specific high-risk AI systems where they are suspected of causing harm and where the claimant has made all proportionate attempts to gather the evidence themselves without success.

Due to the complexity, and sometimes the autonomy, of smart devices and software, it is often difficult to prove exactly what caused the damage. That is why the Directive introduces a so-called rebuttable presumption: if the defendant fails to provide the required evidence or fails to comply with a duty of care, the burden of proof shifts to the defendant. The defendant then has to prove that it did not cause the damage, and thus rebut the presumption.

The rebuttable presumption may be invoked if all of the following conditions are met:

a) the claimant has demonstrated that the defendant was at fault in failing to comply with a duty of care required by law;

b) it can be considered reasonably likely that this fault influenced the output produced by the AI system; and

c) the claimant has proved that the AI output (or the failure to produce an output) led to the damage.

The Directive also sets out special conditions for high-risk AI systems that specify the first statutory condition.

Importantly, the first condition (breach of a duty of care) will be presumed for providers and users of high-risk AI systems if they fail to comply with the requirements placed on them under the AI Act, which defines those requirements in more detail.

The Directive is without prejudice to the rights a person has under the Product Liability Directive. It also does not affect existing rules governing liability conditions in the transport sector or the rules laid down by the Digital Services Act (DSA), and it does not cover criminal liability.

Does your system fall into the category of high-risk AI systems? Reach out to us and we'll find out together.

What if AI fails?

We can see the impact of the new rules in a simple example. Company X implemented a remote biometric identification system at a large stadium to speed up fan check-in and prevent troublemakers from entering. During a match, however, the AI flagged Mr Novak as an unwanted visitor. Based on this assessment, security guards unceremoniously escorted him out of the stadium, causing him not only public embarrassment but also minor injuries. Mr Novak lost his expensive tickets and his evening of sport, and decided to sue the company.

Thanks to the new rules, Mr Novak's situation is easier: he only has to prove that his behaviour was not such as to make him an undesirable person and that the damage (injuries, lost tickets, harm to his reputation) occurred. The court will then order Company X to disclose the documentation of its AI system. It turns out that the algorithm's decision-making lacked transparency and human oversight was inadequate; security blindly trusted the AI without any verification.

As remote biometric identification is one of the high-risk AI systems, the operator had clear obligations under the AI Act. Its failure to meet them directly contributed to the misidentification and therefore to the damage. As a result, the rebuttable presumption is triggered: Company X must prove that it did not make a mistake. If it fails to do so, Mr Novak is likely to succeed in his claim for damages.

This example illustrates the importance of transparency, documentation and human oversight in high-risk systems. Without these, operators can easily find themselves in a situation where liability is directly attributed to them and defences become difficult.

How much time do manufacturers, developers and operators have?

Member States must implement the directives in their laws within two years. In the case of the Product Liability Directive, it is already certain that the new rules will apply from 9th December 2026.

If the AI Liability Directive is approved before the end of 2024, its rules could likewise apply from the end of 2026.

Technology: more opportunities, more threats

The new regulations seek to strike a balance between protecting victims of harm caused by technology and the interests of the businesses, developers and manufacturers who work with the products. On the one hand, the directives make it easier for victims to obtain compensation, while on the other hand they offer businesses safeguards, such as the possibility to challenge liability claims on the basis of the rebuttable presumption under the AI Liability Directive. The main objective of the Directives is to increase public confidence in technology and AI, to promote their safe use across the EU and to prevent their potentially dangerous abuse.

Author

Pavel Čech

I most enjoy helping those who need to combine law with technology, because in the world of IT and software I feel like a fish in water. New technologies, and the way science fiction spills over into our world, have fascinated me since childhood. Only during my university studies did I realise how many ethical and legal obstacles must be overcome before they become reality.


 

Write to us