
Can the law keep up with technology? Artificial Intelligence (AI): Proposed European Union (EU) product liability rules

 

The rapid development of Artificial Intelligence (AI) technology leads us to question whether the English civil liability rules of contractual and extra-contractual liability for loss and damage caused by AI are fit for purpose. The EU has taken the initiative on AI liability with its proposed laws on product liability and product safety designed for the AI age.

On 3 February 2023, the European Council published a draft compromise proposal of the revised Product Liability Directive (“Directive”). Its provisions will come into force in EU Member States 12 months after the Directive is finally approved in the EU legislative process. This draft Directive will revise the original Product Liability Directive (PLD).

The European Parliament has just adopted, at first reading, the consolidated text of a Regulation on Product Safety, and it is expected that the Council will formally adopt the Regulation at first reading without further amendments on 25 April 2023. The Regulation will then be published in the Official Journal of the European Union. Its provisions will apply 18 months after it enters into force. This Regulation revises the General Product Safety Directive (GPSD).

The original PLD and GPSD were both implemented into UK law and retained at the time of Brexit. The new EU measures will therefore create a divergence of rules between the EU and the UK once they take effect, as the UK has not announced any intention to follow the EU legislative initiatives in respect of liabilities arising in the UK. Nevertheless, any UK-based business looking to supply goods and related services in the EU (the largest and closest export market) will need to comply with the new PLD and the GPSD in those markets. In practice, such a business may also need to operate in a very similar manner in the UK domestic market to manage and mitigate its business risks. In all practical terms the UK has become a rule taker, rather than a rule maker.

 

Deficiencies of the PLD

The PLD, when implemented 40 years ago, provided a system of strict liability, whereas the national regimes provided fault-based liability rules which require the claimant to establish fault, damage, and causation. The PLD did not address issues of emerging digital technologies (EDTs), online shopping, connected products and AI, which did not exist at that time.

The EU Commission's evaluation of the PLD identified developments in technology that posed challenges for the application of the existing liability rules:

• The PLD was not designed to address the intangibility of digital products, their dependence on data, their complexity and connectivity.
• Developments in technology have meant that definitions that had previously been clear-cut needed reassessment, including the terms “product”, “producer”, “defect” and “damage”.
• The development of connected products and related services required clearer attribution of damage between businesses and consumers.
• The PLD was unclear on who should be liable for defects resulting from changes in products after they are put into circulation (for example, remote updates).
• The burden of proof (that is, the need to prove the product was defective and caused the damage suffered) was challenging for victims in complex cases.

 

EU: proposal to revise and replace the PLD

The new PLD retains the core concepts of the existing Directive, including the principle of strict liability for damage caused to a consumer by a defective product and the core liability test of whether the product provides the safety which the public at large is entitled to expect. In addition, the proposed new PLD includes the following provisions:

• Establishing new rules of non-fault-based liability for damage caused by AI systems.
• Adding to the scope of products: software, 3D printing, apps, operating systems and related digital services.
• Making software developers and providers of digital services liable for defects in their software and services that cause the product, as a combination of goods and services, to cause damage. Manufacturers will also become liable for post-circulation modifications causing product defects, as will remanufacturers who make substantial modifications to products.
• Making the interconnectedness of products, services and software relevant in assessing product defects.
• Clarifying the attribution of damage between businesses and consumers in relation to connected products and related services.
• Reversing the burden of proof in complex cases involving AI, where liability would otherwise be hard to prove.
• Imposing an obligation on manufacturers to disclose evidence.
• Extending the limitation period for claims from 3 years after first distribution.
• Extending damages for:
o Harm to psychological health, including harm caused by VR products.
o Data corruption.
o Death, personal injury and damage to property.

Alongside its proposal for a revised PLD, the EU Commission has also published a proposal for a directive on adapting non-contractual civil liability rules to AI (the AI Liability Directive). The objective is to make it easier for claimants to bring claims for damage caused by the fault of an AI system under the national laws of member states.

 

EU: proposal to revise and replace the GPSD

The new General Product Safety Regulation revises the GPSD, which provides the pre-existing EU legal framework for the safety of non-food consumer products to the extent that there are no sector-specific provisions in other EU legislation, such as the EU harmonised legislation for medical devices, or where that sector-specific legislation does not apply to the specific safety aspect in question (in which case the GPSD fills the gap).

The GPSD obliges manufacturers, importers, and other producers to:

• Place only safe products on the market.
• Inform consumers of risks associated with the product.
• Ensure that products are traceable and to notify the relevant authorities if a recall is needed.

Member states may impose fines and other criminal or administrative penalties for breach.

The GPSD also established the EU Rapid Alert System, Safety Gate (formerly known as RAPEX), which allows member states to share information on potentially dangerous products and product recall exercises.

The GPSD is now over 20 years old. While it is still the most significant single piece of EU product safety legislation, its importance has been somewhat eroded by the increase in the number of sector-specific directives which set harmonised standards and conformity assessment requirements for particular product types. This has led to a fragmented regulatory landscape for product safety.

 

Deficiencies of the GPSD

The proposal for the regulation was accompanied by a Working Document providing an executive summary of the impact assessment the Commission undertook as part of its evaluation of the GPSD in June 2020. The evaluation concluded that the role of the GPSD as a safety net remained essential for consumer protection and that Safety Gate had proven to be a success. However, the evaluation exposed several factors that called into question the effectiveness of some GPSD provisions, including:

• The GPSD did not address how new technology, such as AI or connected consumer products, can impact product safety. For example, a product may become dangerous by having insufficient cybersecurity protection, or a consumer’s personal security could be endangered if a third party accesses information.
• The GPSD was ill-equipped to deal with the challenges posed by developments in e-commerce, including sales via online platforms. The product safety obligations of operators of online platforms were unclear, which affected consumer protection and created an uneven playing field between online and offline sellers. While some online marketplaces had signed up to voluntary commitments, others had not. There was a particular problem where consumers bought products from operators located outside the EU, particularly if the trader was not represented on the EU market.
• The effectiveness of product recalls from consumers was low. This left too many dangerous products in consumers’ hands.
• The GPSD’s market surveillance provisions were not consistent with the updated market surveillance rules for harmonised products (typically identifiable through the CE marking), introduced by the 2019 Market Surveillance Regulation. The adoption of that Regulation, which applies to those sectors subject to harmonised measures, meant there was an uneven framework between products under harmonised EU rules and products that were not subject to such rules (which were covered by the GPSD). This meant that products subject to the GPSD were subject to less rigorous regulation than products subject to harmonised legislation. Market surveillance authorities lacked the appropriate tools under the GPSD to impose effective sanctions and products were difficult to trace throughout the supply chain. There was also considerable divergence in the approach to product safety risk assessment between member states.

The proposed new regulation seeks to address the above problems. The regulation will work in tandem with the Commission’s legislative proposals governing AI systems, machinery products and connected consumer products.

 

Summary of principal changes to the GPSD

The proposed regulation will, if implemented, introduce sweeping changes to the EU general product safety regime, with a significant increase in the obligations on actors throughout the supply chain, backed up by high fines for breach.

The principal changes proposed are:

• Online marketplaces. A discrete chapter of the proposed regulation is dedicated to online marketplaces. Online marketplaces will have to assume more responsibility in tackling the sale of dangerous products online.
• Online advertising. It will be necessary to publish safety warnings at the time of online advertising. Also, offering products for sale on an online marketplace in an official EU language will be a relevant factor in assessing whether an advertisement is targeted at EU consumers. This means the offer would need to contain information to identify the product and details of the manufacturer or responsible economic operator established in the EU.
• Emerging technologies. The proposed regulation expands the definitions of “product” and “safety” to encompass EDTs. The definition of “product” takes account of interconnectivity of products and the definition of “safety” takes account of the product’s cybersecurity features and its evolving, learning and predictive functionalities.
• Economic operators. The proposed regulation introduces a new definition of “economic operator”: the manufacturer, authorised representative, importer, distributor, fulfilment service provider or any other person who is subject to obligations in relation to the manufacture of products or making them available on the market in accordance with the proposed regulation.
• Technical documents. Manufacturers will have to draw up technical documents in relation to all consumer products. This is already the case with harmonised products but less so under the GPSD.
• Testing. The responsible person will be required to conduct periodic sample testing of products placed on the EU market.
• Market surveillance. The proposed regulation will establish a consumer safety network to exchange information and co-operate with tracing and the recall of dangerous products. Online marketplaces will be obliged to register with a Safety Gate Portal and co-operate with market surveillance authorities.
• Notifications. Economic operators must issue a notification of an accident within two working days after becoming aware of the accident.
• Traceability. The Commission may request an economic operator to establish a system of traceability for certain products presenting a serious risk to health and safety of consumers.
• Product recalls. The proposed regulation contains several provisions designed to improve the effectiveness of product recalls within the EU, including:
o the recall notice must meet specific requirements (a description of the product including a photograph, the name and brand of the product, a clear description of its hazards, and so on). Terms like “precautionary”, “rare” and “voluntary” will be prohibited;
o the recall notice must include an instruction to stop use immediately; and
o the economic operator responsible for the recall must offer the consumer an effective, cost-free and timely remedy to either repair or replace the product or to provide a refund.
• Penalties. Market surveillance authorities will be able to impose fines for non-compliance of up to 4% of the annual turnover of the economic operator or online marketplace in the concerned member state(s).

Also, because the new measure is a regulation rather than a directive, there should be greater consistency across member states.

 

Additional EU proposals for regulation of AI products include:-

• Machinery Products Regulation.
• Revision and replacement of General Product Safety Regulation.
• Connected consumer products.
• Cyber security of internet enabled products.
• Cyber Resilience Act.
• EU Digital Services Act.
• Regulation of sustainable products.

 

Key takeaways of legislative developments in the EU regarding defective products and their safety: –

• More complex products with integrated AI services have developed faster than the law.
• The EU has a legislative plan for liability and regulation of AI products and services.
• Legislation is to be implemented in the course of 2023-2026.
• Other jurisdictions, including the UK, lag behind the EU legal developments.
• The legislative developments will create barriers to entry to the EU market.
• The scope of products and services for which liabilities arise is evolving.
• Coping methods and strategies for risk management and resilience are evolving, with some AI systems being adopted as risk analysis tools and management guides.

This article is based on European Product Liabilities (Butterworths) edited by Patrick Kelly and Rebecca Attree.

Rebecca Attree, Mediator, IPOS Mediation

Patrick Kelly, Partner, Laytons ETL


Complicated dynamics and disputes in family businesses and how to resolve them

In the autumn of 2022, Mishcon de Reya welcomed a panel of experts to discuss the effective management of family centred disputes.

The Panel of experts

The panel was moderated by Rebecca Attree. Rebecca put her considerable skills to good use, guiding an interesting (and often entertaining) discussion on a variety of topics: from the early rumbles of discontent through to what happens once formal litigation proceedings have been issued. The panel comprised Mishcon de Reya’s Shona Coffer, a Partner in Private Commercial Litigation; Victoria Palmer-Moore, Managing Partner at Powerscourt; and Sarah Middleton, a Partner in PwC’s Valuations practice.

The panel’s broad range of expertise enabled the discussion to address everything from what public communication strategy families should take to considerations around the valuation of family businesses.

The Inspiration for the discussion

The event was inspired by last summer’s hit film, House of Gucci. Unfortunately, the family dynamics portrayed in House of Gucci are not dissimilar to those the expert panel encounter in their day-to-day work. The panellists agreed that family disputes require a different approach to purely commercial disputes: more often than not, family disputes are played out in pre-action correspondence. Advisors recognise that, once you push the button on litigation, positions become increasingly entrenched, making it harder to build bridges. From Rebecca’s perspective as a mediator, this can present a challenge for mediation, as it is harder for a potential defendant to assess the merits of a claim before it is particularised in pleadings. However, the benefits of a pre-action mediation usually outweigh this issue.

Phases of a family business dispute

The panel discussed the various phases of a dispute and when it might be appropriate to obtain valuations. It was clear that it is important to consider the timing of a valuation (for example if there are allegations of the other party intentionally dissipating the asset during the course of the dispute). Similarly, important discussions are to be had around the actual value of a family business, both in terms of its reliance on a key person (and how much goodwill resides with that individual); and also in terms of its market value to a third party versus its “real value” to a family shareholder.

Victoria helpfully explained the importance of getting a proper communication strategy in place and – if possible – ensuring that everyone is on the same page. The potential influence of social media means that any strategy should, if possible, take into account all members of the family as well as members of its staff. Family members should be advised to resist the temptation to retaliate to comments made online and should, if possible, identify a single spokesperson to ensure a consistent message.

Speaking about dispute resolution, Shona was clear that litigators are always looking ahead to identify potential opportunities to settle proceedings before they reach court. This is particularly true in family disputes where confidentiality, around both the details and the outcome of a dispute, is often a primary consideration. Rebecca said that, according to statistics, only 7% of mediations fail; they are therefore an incredibly effective tool for dispute resolution, allowing families a safe place to air a grievance whilst searching for a resolution that is both personally and commercially equitable. Often the agreement reached in an out-of-court settlement contains terms that would not have been within the court’s power to order.

Session conclusion

The session ended with each of the panellists imparting a key piece of advice to an individual on the brink of a family dispute:
– Get an early grasp on the value of the asset, the value at stake – and how volatile that value is likely to be over a lengthy dispute;
– Agree a comprehensive communications plan and game plan scenarios;
– Consider whether your client wants to see every letter exchanged between the lawyers – often the tone of this correspondence can unnecessarily exacerbate a dispute; and
– Take a deep breath and count to ten – do you really want to set the ball in motion with litigation or arbitration or is there hope of a resolution at mediation?


Artificial Intelligence (AI) and legal liability

Introduction 

The rapid development of Artificial Intelligence (AI) technology leads us to question whether the English civil liability rules of contractual and extra-contractual liability for loss and damage caused by AI are fit for purpose.

 

Failure or disruption of AI systems can lead to catastrophic risks and liabilities. The UK government’s policy paper on AI states that legal liability for AI outcomes must always rest with “an identified or identifiable legal person – whether corporate or natural”. This position resonates with the Law Commission’s recommendation that liability for self-driving vehicles shifts from driver to manufacturer and software developer. Hence, risk and liability allocation within the AI chain is important to establish liability in the event of a dispute.

 

Contractual liability

The central provisions of contract law are flexible enough to provide a workable framework of liability. However, the contractual rules that predate the development of artificial intelligence technology do not, within that framework, provide sufficient protection for users of artificial intelligence. This is particularly because they do not provide consumers with sufficient mandatory protection rules, implied terms, and minimum service standards.

 

Contractual liability limitations and AI

The principle of freedom of contract in English law allows parties to a contract to agree or to exclude liabilities for breach as they see fit. That principle is partly limited in B2B contracts, and most particularly in B2C contracts, by the imposition of minimum implied terms of fitness for purpose and satisfactory quality, which in B2C contracts are mandatory and cannot be excluded. Agreed contractual terms, if broken, give the party suffering the breach a remedy in an action in damages for the foreseeable loss suffered, or for specific performance of the obligations breached. Contract law can give specific warranties relating to performance and liability for defects in AI products and services, to the contract parties or to other classes of users, operators, or persons adversely affected by defective AI.

 

Generally, a claim in contract is limited to claims brought by the named parties. However, the persons entitled to bring claims can be extended to named persons or classes of persons under the Contracts (Rights of Third Parties) Act 1999; the principles of vicarious liability; or the common law doctrine of voluntary assumption of liability, as illustrated by Volvo’s announcement that it would accept liability for harm caused by its cars when operated autonomously.

 

From a consumer perspective it is comforting that, in B2C contracts, there are terms protecting consumers and mandatory rules imposing minimum requirements that the product will be fit for purpose and of satisfactory quality, or that the service will be delivered with reasonable care and skill. However, as AI opens new frontiers to automated services in areas such as financial services, robotic process outsourcing, and AI as a service, the terms that the law will imply in these sectors have not caught up with technology that moves at a much faster pace than the law. Unfortunately, the UK does not yet have any detailed plans to implement legislation to fill the liability gap.

 

Extra-contractual liability

This liability exists in three parts: liability in the law of negligence; product liability and liability for breach of statutory duty. Negligence is the liability that stems from conduct that fails to conform to a required standard. Product liability provides for liabilities of manufacturers of defective products and those in the supply chain. Civil claims for breach of product safety law give rights to consumers and others suffering special losses to claim for losses arising from breach of regulations to protect consumers or to promote safe products.

 

Negligence limitations and AI

The UK and most civil law jurisdictions, including France and Germany, have extra-contractual liability regimes in negligence for conduct that fails to conform to a required standard. The English law of negligence provides that where a claimant can show (a) the existence of a duty of care, (b) a breach of that duty, and (c) that the breach causes foreseeable consequential damage that is not too remote, the claim is likely to be successful.

 

While in some ways the law of negligence can be adapted to issues of developing artificial intelligence technology liabilities, it also has some limitations. First, the standard of care is the human standard of care that would apply to the average reasonable person in the same situation. With artificial intelligence, this is not possible as artificial intelligence has no human comparator. Second, the foreseeability of damage in human terms has no direct comparator in artificial intelligence terms. Third, claims for pure economic loss are not possible in the law of negligence.

 

Product liability law limitations and AI

The European Union implemented the 1985 Product Liability Directive (PLD), which was transposed into UK law by the Consumer Protection Act 1987. Under the PLD, producers of products and other intermediaries are strictly liable to persons who are injured, or whose personal property is damaged, by a product, if it can be proved that the product is defective. A product is considered defective when it does not provide the safety that a person is entitled to expect, taking into account: (a) the presentation of the product; and (b) the use to which it could reasonably be put at the time at which the product was brought into circulation. “Products” are defined as “any goods or electricity and includes products integrated into other products, whether as a component part, raw materials or otherwise”.

 

There are several significant limitations with regard to the PLD remedies that can be given to persons or property damaged by AI:

 

  1. It is doubtful that the definition of “product” extends to software and artificial intelligence unless that software or artificial intelligence is embedded in hardware forming part of the product. It is far more likely that artificial intelligence will be excluded from the product liability regime in most cases because it has more of the characteristics of a service than a product, as artificial intelligence generates custom-made output based on individual input from a user.
  2. PLD claims are built on an assumption that a product does not continue to change in unpredictable ways once it has been manufactured. Liability is specifically excluded if the defect did not exist when the product was put into circulation, or where the state of scientific and technical knowledge, at the time that the product was put into circulation, was not such as to enable the defects to be discovered.

 

  3. The PLD regime is not universal as it does not apply to defective services, and in the UK it is not linked as closely as it should be to a rapidly evolving product safety regime that takes account of developments in AI technology.

 

  4. In PLD claims the claimant must prove that the product was defective when it was put on the market, which in the case of liability arising from artificial intelligence issues can be very complicated. This is because, with the two main types of artificial intelligence systems being artificial neural networks and probabilistic Bayesian networks, it is very hard to determine how or why a machine learning system made a particular decision. This can set up an impossibly high barrier to a claim being brought without a claimant having disclosure of all the detailed workings of the AI system and without there being any rebuttable presumption of liability where there is a clear link between the AI and the loss and damage.

 

The limitations just described mean that the PLD is not fit for purpose in providing adequate legal protection to those affected by AI outcomes.

 

Civil claims for breach of product safety law and AI

Civil claims for breach of product safety law give rights to consumers and others suffering special losses to claim for losses arising from breach of regulations that protect consumers or promote safe products. The EU has substantial regulations in place to ensure consumer protection and the safety of products, so as to prevent harm to businesses and individuals. English law provides a limited and restricted right for civil claims in damages to be brought for breaches of those regulatory laws. Claims can only be brought where the regulatory law does not provide penalties for breach or other means of enforcement, and even then the right is limited to a small class of potential claimants.

 

Summary

The civil law entitlement to claim damages for loss suffered as a result of a breach of the product safety regulations is very limited, as are the remedies available to persons or property damaged by AI.

 

  1. Not all product safety laws permit civil enforcement.

 

  2. They generally do not apply to the provision of defective services.

 

  3. The UK product safety regime is falling behind the development of technology, particularly in relation to AI. Following Brexit, the UK has no plans to adopt any of the immediate changes in product and services safety law in relation to AI, or other digital products or services. This contrasts with the approach of the EU, which is progressing with its Artificial Intelligence Act, the proposed EU Machinery Regulations, the proposed EU General Product Safety Regulations, the EU Digital Markets Act and the EU Digital Services Act.

 

Takeaways

The takeaways from these descriptions of civil liability law and artificial intelligence are: –

  1. The UK regime is not fit for purpose for AI liability claims in contract, negligence, product liability or product safety.
  2. Post-Brexit the UK has become a rule taker, not a rule maker in its biggest export market. The lack of development of UK law to deal with AI liabilities means UK businesses will have to comply with one set of looser standards in their relatively small home market and stricter standards in their relatively larger EU export market. The reverse will be the case for EU exporters to the UK, putting EU exporters at a comparative advantage.
  3. Contracts should address the risk and cost allocation for defective AI products or services to minimise AI providers’ or users’ exposure to liability.

 

This article is based on European Product Liabilities (Butterworths) edited by Patrick Kelly and Rebecca Attree. It provides an overview of the law. Specific cases need specific advice.

The position is stated as at 1.11.2022.

 

REBECCA ATTREE, Mediator, IPOS Mediation

 PATRICK KELLY, Partner, Laytons ETL

 


Mediation and the art of verbal communication

Voice. It’s not the first thing that comes to mind when you consider what makes a successful mediation, is it? Yet a mediator’s verbal communication skills are a key part of engaging parties and helping them move towards settlement.

A mediator will use their voice in a myriad of subtle but powerful ways. They’ll ensure the volume is at a suitable level to convey their authority without parties struggling to hear them, and that the pitch is crisp and clear, which is particularly useful for people for whom English isn’t their mother tongue. They may emphasise certain words to get someone to focus on why those words are particularly important to them or switch from a purposeful tone – where their voice goes down at the end of a sentence – to an upward, questioning tone to invite the party to explore a point further.

Throughout the mediation session, the mediator will vary the pace, tone, volume and cadence of their voice to create a special kind of rhythm and melody. By using their vocal skills, they give parties the time they need to slow down, absorb and consider information while also creating momentum to keep the negotiation going.

 

Setting the tone
We know that people mirror each other, so when a mediator uses a calm, peace-seeking tone it starts the session off in the right way. This is especially important in the joint open session, if there is one, where it’s not what is said so much as how it’s said. That same mirroring can be used to deflect any potentially explosive situations where parties are raising their voices. Assuming the mediator has significant authority and established trust, if they lower their voice the parties are likely to follow.

 

A measured pace
Taking time at the beginning of a mediation session can often speed up progress further down the line. To that end, during the early stages of a session the mediator may adopt a slower vocal pace and create voice punctuation, or pauses, to give people more time to process ideas and gain clarity. Mediators are very comfortable with their own vocal pauses before speaking, too. It’s a purposeful action, demonstrating that they’ve thought carefully about what they’re about to say.

Pauses are something to be encouraged during a mediation. People often pause before they say something important. If the mediator doesn’t rush to fill the silence but allows it to happen, what the party says next can be pivotal. Also, a pause may indicate hesitation and uncertainty. A mediator who’s attuned to the tone, cadence, and rhythm of someone’s voice will pick up on this and gently prompt: ‘Tell me a bit more about that.’

 

Keeping up momentum
Part of a mediator’s role is to motivate parties and bring a sense of optimism to the session. By varying the intonation, speed, and volume a mediator can use their voice to convey energy and enthusiasm, which is particularly important in the early afternoon when sometimes it can look as if a solution is going to be hard to find. Speaking with energy can lift the parties and keep them focused on their goal of reaching a settlement.

A mediator will also pick up on the energy of the parties from their voices. When someone starts speaking in a monologue-like way it can indicate that they’re getting tired, so a mediator may suggest taking a break for a walk or a cup of tea.

 

Natural skills
All these vocal ‘techniques’ are subliminal. They’re part of an experienced mediator’s subconscious, something they do naturally. It won’t be obvious how a mediator uses their voice to help parties engage in getting to the heart of the matter, or keeps them motivated and optimistic. It’s unlikely parties will realise that during online mediations, when subliminal messages conveyed in the voice are often received more acutely than in person, the mediator has adapted their vocal communication to bring an extra level of care, awareness, and sensitivity to how things are communicated and received.

People may not realise what a mediator is doing, but they know when a mediation works well. If they get to the end of the mediation feeling the experience has been more positive than they had initially expected – well, that’s all down to the skill of the mediator who made it so.

 

By Rebecca Attree and Amanda Bucklow