Global | Publication | February 2022
In October 2021, the European Commission launched a public consultation (Consultation) on adapting liability rules to the digital age and artificial intelligence (AI). The Consultation, which closed on 10 January 2022, received feedback from 189 respondents with regard to: (1) confirming the relevance of issues identified in the course of evaluating the Product Liability Directive in 2018 (in particular, in relation to its application to products in the digital and circular economy), as well as gathering views on how to improve the Directive; and (2) collecting information on the need for, and possible ways of, addressing the specific challenges of AI under the Directive and national liability rules.
With regard to European and national product law, businesses face multiple challenges when assessing the extent of their liability for products and services, particularly in the digital sphere. Emerging technologies, outdated and unclear EU and national liability rules, and divergent national approaches leave producers, service providers and operators, as well as consumers, in a state of legal uncertainty. The Consultation seeks to address these concerns.
In this article we consider the implications of the Consultation (and a related impact assessment) in terms of AI, the digital age and product liability more broadly.
The existing EU product liability framework is based on the Product Liability Directive (85/374/EEC) (Directive or PLD), enacted in 1985, and implemented by national liability rules. The technology-neutral provisions of the framework aim to:
In the European Commission’s view, changed market realities (through transformation of the digital economy, the increasing importance of AI, and transition to a circular economy) make it necessary to revise and adapt liability rules. For example, the security of many products and services increasingly depends not only on their design and production, but also on software updates, data flows and algorithms, and it is unclear whether regulations of the EU Member States on liability for defective products still offer sufficient legal certainty and consumer protection in such circumstances.
In 2018 the European Commission's report on the application of the Product Liability Directive (COM(2018) 246 final) (Evaluation) identified challenges for the application of liability rules to the circular economy and new emerging technologies, and pointed to a need for them to be revised or clarified. The current Consultation seeks to confirm the relevance of the issues identified by the Evaluation and the Inception Impact Assessment (Ares(2021)4266516) (Impact Assessment) published on 30 June 2021.
Proposed reforms as part of a wider strategy in relation to AI
With a focus on AI, the Commission’s Consultation is embedded within a wider, staged approach to developing an ecosystem of trust for AI within the EU:
Section I of the Consultation concerns the Directive. Due to outdated concepts within the Directive, its application is considered to be increasingly problematic – for example, in relation to:
Section II of the Consultation explores problems linked to certain types of AI. AI can make it difficult to identify the potentially liable person, to prove that person is at fault, to prove that there is a “defect”, and to demonstrate a causal link between the defect and the damage. In Section II the Commission seeks stakeholders’ views on various policy options, including in relation to:
Digital content, software and data are essential constituents for the safe functioning of many products. However, it is often unclear to what extent such intangible elements can be classified as “products” under the Directive.
In accordance with Article 2 PLD, “product” means all movables even if incorporated into another movable or into an immovable, and electricity.1 There are definitional problems with the term “product” because of the divergence in national definitions (arising from the transposition of the Directive into the national law of the Member States).
The scope of what is covered by the definition is also problematic. For example:
Such differences in approach to software appear increasingly illogical, given that much software is now embedded within products, including products that incorporate AI. Moreover, they show that what was considered appropriate under the Directive in 1985 does not sit well with today’s distribution methods for software.
In order to solve the problem of delineation between what is a “product” and what is a “service”, the Impact Assessment proposes extending strict liability rules to cover intangible products. As a consequence, digital content or software that causes physical or material damage, irrespective of whether it takes tangible or intangible/digital form, would be covered by the product definition.
The Impact Assessment also proposes that the scope of damage in Article 9 PLD (which currently provides only for physical or material damage) could:
Changes to value chains (in particular, with regard to online market places) are also addressed by the Impact Assessment:
A prerequisite for the liability of the producer is that there be a defective product. According to Article 6 PLD, a product is deemed to be defective if it does not provide the safety that a person can reasonably expect at the time when the product was put into circulation.
How might this requirement relate to software? Although completely error-free software is rare (and perhaps impossible nowadays), the Consultation suggests that applying concepts of what is a “defect” is still workable in relation to software.
However, in respect of AI-equipped products that continuously learn and adapt while in operation, the Consultation notes that it is unclear whether unpredictable outcomes leading to damage could be treated as “defects” under the PLD:
Product liability starts at the time a product is put into circulation, i.e. “when it is taken out of the manufacturing process operated by the producer and enters a marketing process in the form in which it is offered to the public in order to be used or consumed.”3
In contrast to conventional products, various questions arise in relation to software-driven or self-adapting and learning products (for example, in relation to subsequently installed software and updates):
The Consultation suggests that the physical release of software-based products from the sphere of the manufacturer is no longer the correct way of establishing that something has been put into circulation. An appropriate way of determining whether a product has been “put into circulation” would have to consider the scope and functionality of the update.
With regard to AI-equipped products and AI-based services:
Because defectiveness is to be determined at the moment a product is put into circulation, the Consultation raises the possibility of clarifying existing law to address periods during which a product continues to develop and learn (for example, an AI product that involves machine learning or ongoing training). For such periods, the Consultation suggests that it might be preferable to establish strict liability, on the basis that it could be more effective than the fault-based liability currently applied under many national laws.
The Consultation also addresses the determination of liability in connection with business models under which products are repaired, recycled, refurbished or upgraded after they have been put into circulation. According to the Evaluation, the Directive remains unclear about who should be liable for defects arising from such changes.
Where should the burden of proof lie in establishing strict liability of the producer? The complexity of certain products (particularly digital products, such as AI) makes it very challenging for injured parties to identify the producer responsible, and to prove that a defect caused the damage they suffered.
The Impact Assessment proposes several options to ease the evidential burden and to reduce obstacles to getting compensation:
The specific characteristics of AI (such as autonomous behaviour, continuous adaptation, limited predictability and opacity) make it difficult to determine liability with sufficient certainty and to obtain compensation for damage.
Injured parties may not have sufficient technical information about AI products and services. For example, with certain opaque AI systems it is very difficult to understand how an output was produced. As a consequence, it is particularly difficult and costly for injured parties to identify and prove the fault of a potentially liable person, or the causal link between that fault or defect and the damage suffered.
Currently, proving that “the state of scientific and technical knowledge at the time when he put the product into circulation was not such as to enable the existence of the defect to be discovered” releases a producer from liability pursuant to the so-called “development risk defence” (commonly referred to as a “state of the art” defence in US jurisprudence) under Article 7 PLD. This defence does not take into account the many issues presented by the continuous adaptation of software. Under a risk-based approach, the Impact Assessment suggests that risks related to AI-equipped systems and the resulting uncertainty could be transferred to the producer:
Several matters regulated in the Directive are left to the discretion of Member States, including:
The PLD does not preclude other causes of action under national law that are outside the scope of the matters it regulates, provided that those national laws are consistent with the operation of the PLD.4
In adapting the EU liability framework, the views of the European Court of Justice (ECJ), as expressed in its recent case law, will have to be taken into account. In particular, its rather plaintiff-friendly case law has implicitly:
As a consequence, manufacturers have had to demonstrate that their products were compliant (rather than plaintiffs having to demonstrate the opposite). In addition, national courts may apply diverging ad hoc solutions (e.g. by alleviating the burden of proof), and Member States may attempt to address the resulting legal uncertainty at national level.
These outcomes could lead to further fragmentation of liability rules across the EU for damage caused by AI. A lack of harmonised rules could lead to obstacles in the internal market, and a possible lower level of protection of the consumer. Such concerns are reflected in the following options outlined in the Impact Assessment, referred to in the Consultation:
Although the current provisions of the Product Liability Directive are intended to be technology-neutral, the need for adaptation is evident. Against the background of the roadmap’s findings, the extent of the proposed increase in liability is remarkable. The proposed alleviation or reversal of the burden of proof could fundamentally change the Directive’s liability regime. As adoption by the European Commission is planned for the third quarter of 2022, it will be interesting to see which liability regime ultimately finds a majority.