UK publishes official guidance to judiciary on use of AI

December 13, 2023

On 12 December 2023, the UK published official guidance to the judiciary on the use of AI in litigation in England & Wales.

A summary of key points and takeaways is below. The guidance is accessible here.

Background

AI has been used in English litigation for several years, primarily through tools deployed in the document review process, such as predictive coding. The use of predictive coding is expressly permitted under the Civil Procedure Rules.

With the recent emergence of generative AI tools using large language models such as ChatGPT, the role of AI in litigation is receiving increasing focus. The fast pace of change means that updates to the courts’ rules inevitably lag behind the development of new AI applications.

Please see our briefing here for further details on the current AI landscape in English litigation.

UK judicial guidance

The UK has now issued guidance to judges to assist them and to set expectations relating to the use of AI in litigation. The guidance was issued following a consultation by four very senior judicial office holders: the Lady Chief Justice of England & Wales, the Master of the Rolls, the Senior President of Tribunals and the Deputy Head of Civil Justice.
Formally, the guidance applies to judges rather than directly to parties/legal representatives, but it sets clear expectations of general application that parties and legal representatives should be alive to.

Some key points emerging from the guidance are:

  • In principle, the use of generative AI in litigation is permissible by both judges and parties/legal representatives, provided that it is used responsibly, and appropriate safeguards are followed. However, the use of generative AI for legal research and legal analysis is currently “not recommended”. The guidance suggests that “potentially useful tasks” include summarising text, assisting with preparing presentations and administrative tasks such as preparing emails.
  • Legal representatives must be mindful that they are “responsible for the material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate.” There is no blanket duty to disclose the use of AI in submissions to the court but this is a judgement call: “provided AI is used responsibly, there is no reason why a legal representative ought to refer to its use, but this is dependent upon context.” There is a specific warning that currently available large language models tend to be trained primarily based on US law rather than English law.
  • The use of generative AI to assist judges and their assistants in research or preparatory work to produce judgments is permitted as “a potentially useful secondary tool” provided that the guidance is appropriately followed.
  • AI chatbots should not be used to confirm factual matters in litigation, but “may be best seen as a way of obtaining non-definitive confirmation of something”. The accuracy of any information provided by an AI tool must be checked before it is used or relied upon. Judges must be aware of the risk that information provided by AI tools is inaccurate, out-of-date, fictitious or biased.
  • Judges must not enter confidential or private information into a public AI chatbot and must maintain security best practices at all times (guidance which legal representatives would be well advised to follow too).
  • Judges must be alive to the fact that unrepresented litigants are likely to be more reliant (or wholly reliant) on AI chatbots than represented litigants. Judges must also be aware of the risk that AI can be used to produce sophisticated fake material, i.e. “deepfakes”.

Key takeaways

This guidance reflects the measured and incremental approach the judiciary is taking towards the use of generative AI in litigation in England and Wales. It is also notable that the guidance is principles-based rather than prescriptive: judges and legal representatives may use generative AI in principle, but must take into account safeguards and their professional duties at all times.

While in formal terms the guidance is addressed to judges, it contains important considerations that legal representatives should take into account.

Given the pace of change in this space, it is inevitable that this guidance will develop over time. It is also likely that guidance or rules specific to legal representatives and/or amendments to the Civil Procedure Rules will be introduced in due course.