New frontiers: Regulating artificial intelligence in international arbitration
Global | Publication | September 2024
Introduction
With the proliferation of artificial intelligence (AI) tools and their increasingly widespread use, there is a compelling case for specific frameworks to regulate AI in international arbitration. The development of such frameworks remains at a nascent stage. This article will consider the guiding principles that should inform the development of appropriate frameworks, examine the innovative approach in the Silicon Valley Arbitration & Mediation Center (SVAMC) Guidelines on the Use of Artificial Intelligence in Arbitration (Guidelines) and draw lessons from national courts’ responses to the use of AI in litigation.
Use of AI tools in international arbitration
The increasing availability of generative, evaluative or discriminative AI tools presents new opportunities and challenges for international arbitration practitioners and other stakeholders. These innovations can be used to promote efficiency without compromising the integrity and quality of international arbitration as a dispute resolution process. The ability to automate routine tasks such as document review, evidence management, translation and the summarization of complex documents can offer cost efficiencies and free up lawyers to undertake more complex tasks.
At the same time, AI tools raise concerns about transparency, bias, accuracy, confidentiality and due process. For example, AI tools are trained using historical data, which risks perpetuating biases or producing inaccurate results. Real-life examples have demonstrated that results produced by AI can be manifestly wrong, including the generation of nonexistent case citations and quotes (for example, Mata v. Avianca, Inc., 22-cv-1461 (PKC) (S.D.N.Y. Jun. 22, 2023) and Darlene Smith v Matthew Farwell & Ors, 2282CV01197 (Mass. Sup. Ct. Feb. 12, 2024)).
Application of existing arbitral rules to the use of AI
Arbitral tribunals have broad discretionary powers to manage the procedure in arbitral proceedings. This discretion arises both under national laws applicable to arbitration and under the rules adopted in the proceedings. For example, Article 17(1) of the UNCITRAL Arbitration Rules provides:
“…the arbitral tribunal may conduct the arbitration in such manner as it considers appropriate, provided that the parties are treated with equality and that at an appropriate stage of the proceedings each party is given a reasonable opportunity of presenting its case. The arbitral tribunal, in exercising its discretion, shall conduct the proceedings so as to avoid unnecessary delay and expense and to provide a fair and efficient process for resolving the parties’ dispute.” (emphasis added)
This discretion is usually broad enough to encompass the use of technology, including AI. However, reliance on the tribunal’s broad discretion over the management of procedure does not necessarily:
- deal with disclosure of what, when and how AI is used;
- help those involved in arbitral proceedings to understand the acceptable use cases for AI; or
- provide parties, counsel or tribunal members with clarity about how they can expect AI to be managed throughout the arbitral process.
Without specific frameworks to regulate the use of AI in international arbitration, there is a risk of inconsistent approaches or a devolution to the various national laws (to the extent that they deal with AI) and any applicable professional standards. Several professional organizations have published guidelines for practitioners on the responsible use of artificial intelligence, for example, the New South Wales Law Society Responsible AI Guide and the American Bar Association Ethics Opinion. As the response to virtual hearings during the COVID-19 pandemic demonstrated, specific frameworks dealing with technological innovations can offer greater transparency, predictability and consistency.
Guiding principles
International arbitration is founded on the principle of party autonomy, and fairness, accountability and transparency are fundamental to maintaining trust and confidence in the process. The use of AI in international arbitration should therefore be guided by the following core principles:
- The need for appropriate human oversight and responsibility;
- Transparency; and
- Confidentiality.
The need for human oversight and responsibility
Arbitral proceedings commonly involve complex legal and factual issues and require lawyers to provide strategic advice to their clients. Human experience and expertise are crucial to test results, understand context and draw appropriate inferences. There remain many elements of human judgment and persuasion that machines cannot replicate. AI tools should be used as a technological aid for lawyers, rather than a substitute for their expertise.
Human oversight is therefore necessary to examine the results produced by AI tools in order to identify errors or biases and to account for cultural and emotional nuance. As AI tools are developed using past data, human involvement ensures consideration of circumstances and exceptions that depart from previous practice and would not otherwise be captured by algorithms based purely on precedent.
As AI systems become increasingly sophisticated and autonomous, it is crucial to have clear accountability for decision-making. Responsibility for incorrect, biased or flawed outputs should not be delegated to an algorithm.
Transparency
Some AI tools operate as ‘black boxes,’ meaning that, while the results are the product of processed data, the manner in which a given result has been reached is not known. Where the results generated by AI tools may be relied upon to make decisions, practitioners must educate themselves about how these tools work and their limitations.
The need for transparency and accountability may also require that, in appropriate cases, practitioners (including arbitrators) disclose when and how AI tools have been used.
Confidentiality
The use of AI tools in international arbitration may require practitioners to input client confidential data into third-party systems. It is crucial that such systems have adequate data protection and confidentiality standards.
SVAMC Guidelines
The SVAMC published its Guidelines on the Use of Artificial Intelligence in Arbitration on April 30, 2024. The Guidelines are the first to offer an international standard on the use of AI in international arbitration. The SVAMC says the Guidelines “offer a set of best practices for the use of AI in international arbitration” and “seek to address both current and future applications of artificial intelligence from a principled framework, while also bearing in mind that the technology will continue to evolve rapidly.”
The stated aim of the Guidelines is to empower parties to benefit from AI tools while mitigating risks to the integrity and fairness of proceedings. The Guidelines set out principles that are divided into three categories:
- guidelines for all participants;
- guidelines for parties and their representatives; and
- guidelines for arbitrators.
The Guidelines adopt a human-centric approach to the use of AI in arbitral proceedings aimed at promoting fairness, efficiency and transparency:
- Guidelines 1, 4 and 6 provide for human responsibility for decision-making: These make participants who use AI tools responsible for understanding how the tools work (at least at a basic level) and their intended uses, including relevant limitations, biases and risks and, to the extent possible, how to mitigate those risks. Equipped with this knowledge, practitioners can adapt their use accordingly. The parties and their representatives bear ultimate responsibility for errors or inaccuracies in output produced by an AI tool and should review that output to verify its factual and legal accuracy. Likewise, arbitrators may not delegate any part of their personal mandate to any AI tool, particularly the decision-making process. The use of AI tools by arbitrators shall not replace their independent analysis of the facts, law and evidence.
- Guideline 2 and confidentiality: This highlights the need to ensure that use of AI tools is consistent with confidentiality obligations and reminds participants that they should not submit confidential information to any AI tool without appropriate vetting and authorization, including of data use and retention policies.
- Guidelines 3, 5 and 7 address transparency: Interestingly, these preserve the parties’ discretion regarding disclosure of the use of AI tools. Unlike the practice that appears to be emerging in national courts, which tend to require parties to disclose the use of AI tools, the Guidelines provide that disclosure is “not necessary as a general matter” and that decisions should be made on a case-by-case basis. Where disclosure is made, Guideline 3 states that information about the name, version and relevant settings of the tool used, a short description of how the tool was used and the complete prompt may help reproduce and evaluate the output of an AI tool. Guideline 5 operates as a safeguard, preventing parties, their representatives and experts from using AI in ways that affect the integrity of the arbitration or otherwise disrupt the conduct of the proceedings, and imposes obvious prohibitions on falsifying evidence or misleading the tribunal and the opposing party. At the decision-making end, Guideline 7 prevents arbitrators from relying on AI-generated information outside the record without prior disclosure.
The Guidelines are not intended to replace or override local laws or regulations but to serve as a supplementary international standard that provides a common denominator for AI’s ethical and effective use in international arbitration. They include a model clause or order that can be included in an arbitration agreement or procedural order for the purpose of adopting the Guidelines in arbitral proceedings:
“The Tribunal and the parties agree that the Silicon Valley Arbitration & Mediation Center Guidelines on the Use of Artificial Intelligence in Arbitration (SVAMC AI Guidelines) shall apply as guiding principles to all participants in this arbitration proceeding.”
Court approaches
National courts are also grappling with the use of AI in domestic legal proceedings. Regulation in domestic civil procedure remains piecemeal but courts in certain jurisdictions have issued rules and guidelines that provide a useful point of comparison for international arbitration.
For example, the Supreme Court of Victoria, Australia, has recently published guidelines on the use of AI tools by litigants, and the UK and New Zealand courts have published guidelines on the use of AI tools by judicial officeholders and their support staff. These guidelines emphasize similar themes to the SVAMC Guidelines: awareness of the limitations of AI tools, confidentiality and accountability for research and decision-making. For judicial officeholders and their staff, the use of AI tools for tasks such as summarizing information, planning speeches and straightforward legal research is identified as potentially helpful, but use for legal analysis is not recommended.
Notably, the guidelines diverge on the issue of disclosure. The UK and NZ guidelines for judicial officeholders do not require the use of AI by judges to be disclosed, whereas the Victorian guidelines for litigants recommend that parties and their practitioners ordinarily “disclose to each other the assistance provided by AI” to the legal task undertaken and, where appropriate, that “the use of AI should also be disclosed to other parties and the court.”
This concern with transparency can also be seen in a standing order made by the New York State Supreme Court and a practice direction issued by the Court of King’s Bench of Manitoba, Canada, which require disclosure of the use of AI tools to prepare materials filed with the court.
Conclusion
Given their impact on efficiency, it is inevitable that AI tools will continue to proliferate in international arbitration. Appropriate regulation of their use is likely to be regarded as ever more important to ensuring fairness, accountability and transparency.
The SVAMC Guidelines offer a principled, human-centric approach to governing the use of AI and provide a useful initial framework for all participants in international arbitration. In appropriate circumstances, parties can also consider a bespoke agreement on the use of AI in their disputes, either as part of the arbitration agreement or in procedural orders once an arbitration has commenced.