Charting the Use of AI in Arbitration: A Closer Look at the CIArb Guideline (2025)
Earlier in 2025, the Chartered Institute of Arbitrators (CIArb) released its Guideline on the Use of AI in International Arbitration.
This blog post offers a summary of the key components of the CIArb Guideline, highlighting its areas of innovation and suggested procedural tools.
Scope and Purpose of the Guideline
The CIArb Guideline is framed as a non-binding soft law instrument and is explicitly intended to apply to international arbitration proceedings, whether ad hoc or institutional. It offers practical guidance on the responsible use of AI tools in a way that upholds procedural fairness, party equality, and the integrity of the process.
The document is organized into four parts and two annexes:
Part I: Benefits, Risks, and Definitions
Part II: General Recommendations
Part III: Parties’ Use of AI in Arbitration
Part IV: Use of AI by Arbitrators
Annexes A and B: Model clauses and procedural templates
Part I – Benefits, Risks, and Definitions
Part I of the CIArb Guideline sets the foundation for understanding how AI may affect arbitral proceedings. It begins by outlining the potential benefits (1.1-1.10), such as improved efficiency, cost savings, enhanced consistency in legal research, and support for procedural and administrative tasks. At the same time, it acknowledges the risks posed by AI use (2.1 et seq.), including lack of transparency, bias, data security concerns, and over-reliance on systems that may produce unreliable or unverifiable outputs (e.g., hallucinations).
To provide a shared vocabulary, the CIArb Guideline adopts the OECD definition of Artificial Intelligence as a system that, for a given set of objectives, can make predictions or generate outputs influencing real or virtual environments. The CIArb Guideline also provides a useful vocabulary of AI-related terms (see Introduction) and includes annexes with practical templates. Annex A offers a model agreement on the use of AI, while Annex B contains sample procedural orders.
Part II – General Recommendations
Part II sets out general recommendations for all participants in arbitration proceedings, including arbitrators, parties, and counsel. The emphasis is on fostering a shared responsibility for the informed and careful use of AI in arbitration.
Key considerations include:
- Understanding AI Tools: Users are encouraged to make reasonable efforts to understand the technology, functionality, and data behind any AI tool used in the arbitration (3.1).
- Risk Assessment: Users should assess the potential risks and benefits of AI use, especially in relation to due process, the rule of law, environmental impact, and the credibility of arbitration (3.2).
- Legal Awareness: The CIArb Guideline encourages participants to consider applicable AI-related laws, regulations, or court rules relevant to their jurisdiction (3.3).
- Responsibility and Accountability: Crucially, it clarifies that the use of AI does not reduce a participant’s accountability. Unless expressly agreed otherwise, all parties remain fully responsible for their actions and submissions, regardless of whether AI tools were involved (3.4).
By setting out these general principles, the CIArb Guideline provides a foundation of diligence and mutual awareness, to be built upon by the more specific guidance in the sections that follow.
Part III – Parties’ Use of AI in Arbitration
Part III provides structured recommendations for how parties may use AI tools during arbitral proceedings. It addresses three main areas:
- Procedural Oversight: Arbitrators retain broad powers to regulate AI use, including issuing directions, appointing experts on AI, and requiring disclosure of AI use where necessary to preserve fairness and integrity. Non-compliance may lead to adverse inferences or cost consequences (4.1-4.7).
- Party Autonomy: Subject to applicable law and rules, parties are free to agree on the scope of AI use in their arbitration agreement. Where the agreement is silent, arbitrators are encouraged to raise the issue early and invite party input (5.1-5.4).
- Admissibility and Disclosure: Arbitrators may rule on the use or admissibility of AI-generated content, considering factors such as accuracy, bias, and evidentiary value (6.1-6.8). Disclosure may be required to ensure transparency and accountability throughout the proceedings (7.1-7.7).
Part III reflects a careful balance between party-driven flexibility and tribunal-led procedural control.
Part IV – Use of AI by Arbitrators
Part IV sets out broad principles for the tribunal’s own use of AI. Arbitrators may use AI tools to support their mandate, particularly for administrative or analytical efficiency (8.1). However, the Guideline underscores that arbitrators must retain independent judgment and remain accountable for all decisions (8.3, 8.4).
Arbitrators are advised not to delegate core legal functions – such as legal analysis or factual assessments – to AI. Any use of AI must be critically supervised and should not compromise the integrity or enforceability of the award (8.2).
Transparency is also emphasized (9.1-9.2). Arbitrators are encouraged to consult parties before using AI tools and to avoid use where objections are raised. Members of a tribunal are also encouraged to coordinate AI use among themselves to ensure consistency and procedural integrity.
Conclusion
The CIArb Guideline is one of the most comprehensive soft-law instruments to date addressing the fast-moving intersection of AI and arbitral procedure. Aimed at arbitrators, parties, and counsel alike, it seeks to promote informed, transparent, and fair use of AI in international arbitration.
Whilst it does not make direct reference to previous documents, it complements and expands on earlier efforts, such as the Silicon Valley Arbitration and Mediation Center (SVAMC) Guideline on the Use of Artificial Intelligence in International Arbitration (2024) and the Stockholm Chamber of Commerce (SCC) Guide to the Use of AI in Cases Administered Under the SCC Rules (2024). These earlier publications addressed some of the initial challenges and questions around AI use in arbitration, while the CIArb Guideline takes a broader and more detailed approach, setting out comprehensive recommendations for all participants in the arbitral process.
Whilst it remains a non-binding instrument, its comprehensiveness and structure may encourage adoption by arbitral institutions, tribunals, and parties seeking predictability and fairness in proceedings involving AI.
Click here to access the CIArb Guideline.
Piotr Wilinski would like to thank Guido Machado Peláez for assisting in the preparation of this article.