REPOSITORY OF ARTIFICIAL INTELLIGENCE (AI) INITIATIVES IN CANADIAN COURTS AND TRIBUNALS
A Statement from the Action Committee
Our Committee supports Canada’s courts in their modernization efforts. It provides guidance for addressing challenges, and highlights opportunities and innovative practices to modernize court operations and improve access to justice for court users.
1. CONTEXT
In its initial suite of artificial intelligence (AI)-related publications, the Action Committee sought to demystify the field for courts and court users, including a plain-language overview of key points; provide guidance for courts on the use of AI both internally and by court users; and highlight some common AI legal research tools. This repository aims to complement this work in a concrete manner by highlighting different AI initiatives from jurisdictions across the country. The focus is on court and tribunal-led AI work rather than independent use of the technology by court and tribunal users.
Consultations with courts, tribunals, and their administrators revealed a range of approaches to implementing AI, as well as different barriers, concerns, and desired goals when considering whether and how to use AI in their operations. While AI continues to become more integrated in many people's everyday lives, leveraging it appropriately to maximize efficiencies while minimizing risks requires careful planning, ongoing monitoring, and gradual implementation. As a result, many jurisdictions' engagement with AI is currently in the early research and exploratory stages.
In this context, AI is a broad term referring to digital technology that performs tasks typically associated with human brainpower, including understanding and interpreting language, learning, artistic creation, and abstract problem solving. AI itself comprises multiple subfields, with further specialization within each. By nature, AI evolves both because of, and in response to, developments within these subfields.
2. EXAMPLES
The profiles featured below are divided between courts and tribunals, and organized alphabetically by jurisdiction. These examples highlight both concrete applications of AI to address specific problems and the important steps underpinning the implementation of such initiatives. Overall, courts and tribunals treat AI as one tool among others, which may or may not be appropriate depending on the circumstances, an approach that aligns with the Action Committee's previous guidance. Consultees likewise emphasized striking an appropriate balance between improving access to justice through the use of AI and respecting judicial independence. This approach is consistent with the Canadian Judicial Council's guidelines on judicial use of AI, which state unequivocally that judges can never delegate their decision-making authority. Other challenges identified through consultation concerned data management, security and confidentiality, and resource requirements.
This repository is current as of 2026-02-27. As other initiatives are developed, the Action Committee invites courts, tribunals, and justice stakeholders to forward any relevant information to the following address: AC-secretariat-CA@fja-cmf.gc.ca. This information may be used to inform future Action Committee publications or to update this repository.
2.1 Courts
2.1.1 BC Courts Services Branch – Key Initiatives Related to AI
With a well-established history of using emerging technology to improve efficiency and access to justice, supported by significant interest in innovation at the political and judicial levels, the Court Services Branch (CSB) of British Columbia’s Ministry of the Attorney General takes a problem-first approach to the implementation of new technologies. This means that, rather than starting from the assumption that deploying AI would be appropriate or useful, the CSB considers it as simply one of many tools that could help resolve an identified issue in a concrete manner. The CSB has currently identified three areas where AI might be a useful tool to improve its processes: 1) document review; 2) Deputy District Registrar training and support; and 3) transcription.
Document review
The CSB is experimenting with an AI-augmented tool to support the filing of high-volume forms. Using Optical Character Recognition (OCR) augmented by AI, the tool converts PDF documents filed electronically by court users into machine-readable text, eliminating the need for a clerk to manually enter the form's information into the court's case management system. The CSB's Court Services Online application then assesses whether the information is sufficiently accurate and, if so, automatically files the form electronically. Where accuracy falls below the specified threshold, the tool flags areas for human review. The tool also applies and issues a stamp certifying filing in real time. Although the tool is currently used only for responses to civil claims, the CSB aims to eventually expand it to other high-volume forms. The CSB has also recently leveraged guided pathways to populate online forms with the necessary information: the user simply answers plain-language questions and does not need to interact directly with the forms. As the adoption of this approach becomes more widespread, it is anticipated that the intermediate reading and converting function of the AI filing tool will no longer be needed.
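The route-or-flag logic described above can be sketched in a few lines. This is an illustrative model only: the field names, confidence scores, and the threshold value are invented for the example and do not reflect the CSB's actual system or settings.

```python
# Illustrative sketch of confidence-threshold routing for an OCR'd form:
# fields read with high confidence allow automatic e-filing, while any
# low-confidence field sends the form to a clerk for human review.

CONFIDENCE_THRESHOLD = 0.95  # assumed value, for illustration only


def route_form(fields):
    """fields: list of (field_name, value, ocr_confidence) tuples."""
    flagged = [name for name, _, conf in fields if conf < CONFIDENCE_THRESHOLD]
    if flagged:
        return {"action": "human_review", "flagged_fields": flagged}
    return {"action": "auto_file", "flagged_fields": []}


# Hypothetical response-to-civil-claim form with one uncertain OCR read.
form = [
    ("claimant_name", "J. Doe", 0.99),
    ("file_number", "S-123456", 0.88),   # below threshold: needs a clerk
    ("filing_date", "2026-02-01", 0.97),
]
result = route_form(form)
```

The value of this pattern is that the threshold, not the clerk, decides when a human is needed, so staff time is spent only on the uncertain cases.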
Deputy District Registrar (DDR) training and support
The CSB is exploring the possibility of using AI to assist in training and supporting DDRs, whose public-facing role requires them to respond rapidly to queries on a wide variety of subjects. The proposed tool would function similarly to ChatGPT but be restricted to specific, relevant source materials supplied by the CSB – such as court rules, legislation, or notices to the profession – thereby mitigating the risk of hallucination (false or inaccurate output). Reference links embedded in responses would enable DDRs to confirm accuracy. While DDRs would retain responsibility for decision-making, the tool could help them pinpoint relevant information on topics they deal with infrequently and therefore provide their services more efficiently.
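The retrieval pattern behind such a tool can be illustrated with a minimal sketch. The source documents, links, and the simple word-overlap scoring below are invented for the example; a production system would use more sophisticated retrieval, but the key property is the same: answers can only come from the curated corpus, each carrying a link for verification.

```python
# Minimal sketch of corpus-restricted retrieval: the assistant can only
# surface passages the CSB has supplied, each with a reference link so a
# DDR can confirm accuracy. Texts and links here are invented examples.

CURATED_SOURCES = [
    {"title": "Supreme Court Civil Rules", "link": "https://example.org/rules",
     "text": "A response to civil claim must be filed within 21 days of service."},
    {"title": "Notice to the Profession 12", "link": "https://example.org/np12",
     "text": "Fax filing is no longer accepted at any court registry."},
]


def retrieve(query):
    """Return the curated passage with the most word overlap, or None.

    Because nothing outside CURATED_SOURCES can ever be returned, the
    tool cannot fabricate an answer from unvetted material."""
    q_words = set(query.lower().split())
    best, best_score = None, 0
    for doc in CURATED_SOURCES:
        score = len(q_words & set(doc["text"].lower().split()))
        if score > best_score:
            best, best_score = doc, score
    return best


hit = retrieve("How many days to file a response to a civil claim?")
```

When no passage overlaps the query, the sketch returns nothing rather than guessing, which is the behaviour that mitigates hallucination.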
Transcription
With the proper framework and safeguards in place, AI can improve access to court transcripts. For example, the CSB is exploring the use of AI and large language models to convert official court audio to text. These could then be integrated with court documents to create a final product that is easier to read and navigate than traditional transcripts. While the CSB will continue to employ humans with specialized expertise to produce official transcripts, the rough or unofficial versions produced by an AI tool could provide significant cost savings in contexts such as trial preparation or note review by court users or judges. In preliminary testing, the CSB has found this technology to perform quite well, and noted that audio quality, rather than variations in language or accents, poses the biggest challenge to AI transcription. However, due to the risks of discrimination resulting from the fact that many tools are built from an anglophone, North American perspective, the CSB remains attuned to possible challenges associated with AI accuracy in the face of different accents. The CSB is also sensitive to particular Indigenous language requirements, in keeping with the province’s broader efforts to incorporate Indigenous alphabets in government systems.
2.1.2 Courts Administration Service – Key Initiatives related to AI
Innovation is central to CAS's mission of supporting Canada's national Courts and facilitating access to justice. CAS has embedded innovation across its operations, leveraging technologies such as AI to modernize judicial administration and improve service delivery. CAS has implemented several AI initiatives, including Amicus, a Federal Court of Appeal (FCA) artificial intelligence assistant for registry employees, and the application of neural machine translation (NMT) to the translation of judicial decisions.
Amicus
Amicus acts like a “friend of the court” and is intended to provide FCA registry staff with quick, accurate and context-specific answers to questions about Registry procedures, tools and operations. The tool was launched in January 2026, with overwhelmingly positive feedback.
Amicus provides accurate and sourced information from a database populated by the registry. The bilingual tool also provides consistent answers to similar prompts or questions and provides users with the ability to give positive or negative feedback to each answer received.
By implementing this tool, the registry sought to address the following issues:
- Procedural information was not easily accessible: it was scattered, and the authoritative source was difficult to identify. As a result, work instructions were not systematically followed, leading to inconsistent practices across the country.
- Staff must remember many details to accomplish the work, especially in regional offices serving multiple courts, leading to reduced quality of services.
- The FCA has a lower volume of proceedings, so exposure to FCA processes is less frequent, leading to a higher risk of errors.
With the launch of Amicus, national consistency and the quality of services to internal and external clients are expected to improve, reinforcing national standardization of registry processes over time.
Neural Machine Translation Project
CAS has deployed a neural-translation AI solution to improve the accuracy, accessibility, and timeliness of court decisions in both official languages. This approach not only enhances procedural fairness but also shows how AI can be applied responsibly to uphold public service values such as transparency, linguistic duality, and equal access to justice.
The driver for adoption of NMT was to help manage increased translation obligations and operational requirements resulting from an amendment to the Official Languages Act (OLA) requiring that final decisions of “precedential value” be simultaneously available to the public in both official languages.
After completing an environmental analysis and consulting with multiple stakeholders, such as NMT service providers and federal government partners, CAS chose to assess three tools available on the market. Over a 6-month period (March-August 2024), the pilot project assessed these tools’ ability to both translate from English to French, and from French to English. CAS jurilinguists applied an assessment grid focusing on the quality of pretranslations generated with respect to 62 decisions across 13 areas of law. As CAS was already working in partnership with a software development company specializing in the automation of translation, it was able to use its services to assess the three NMT tools with the assistance of its existing translation platform.
CAS assessed these results alongside several other important elements like technological integration, information security and cost, to determine which tool best suited its needs.
For CAS’ translation team, the NMT tool is one tool among many and does not remove the need for a human expert in the loop: its jurilinguists. In keeping with best practices and judicial direction, CAS is committed to ensuring that all decisions translated with the assistance of the tool are carefully reviewed by a jurilinguist who understands the exact legal meaning of the words, bilingual and bijural conventions, and the subtleties and nuances of human language. This review also helps avoid unconscious bias that could affect the quality and fidelity of translated decisions.
The output, functioning, and improvement of a NMT tool depends on the quality of the corpus of decisions which feeds it. Its efficacy can therefore increase over time if trained appropriately. CAS’ NMT tool is used in an integrated manner with its existing translation platform, which acts as a “translation memory” and is updated weekly by jurilinguists. Careful and regular maintenance of the tool by jurilinguists, combined with their thorough, sentence by sentence translation and review, allows CAS to progressively improve its internal translation capacities, meet its obligations under the OLA, and maintain a high standard of quality for translations.
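The "translation memory first" flow described above can be sketched simply. The memory entries, the fuzzy-match threshold, and the placeholder NMT call are all invented for illustration; CAS's actual platform, threshold, and engine are not public. Either path ends with jurilinguist review.

```python
# Sketch of a translation-memory-first pipeline: reuse an approved human
# translation when a close match exists, otherwise fall back to an NMT
# draft. All output still goes to a jurilinguist for review.
import difflib

# Memory of jurilinguist-approved sentence pairs (invented example).
TRANSLATION_MEMORY = {
    "The appeal is dismissed with costs.": "L'appel est rejeté avec dépens.",
}

FUZZY_THRESHOLD = 0.9  # assumed: below this similarity, use the NMT engine


def pretranslate(sentence, nmt=lambda s: f"[NMT draft] {s}"):
    """Return a draft translation and its provenance ('memory' or 'nmt')."""
    match = difflib.get_close_matches(
        sentence, TRANSLATION_MEMORY, n=1, cutoff=FUZZY_THRESHOLD)
    if match:
        return {"draft": TRANSLATION_MEMORY[match[0]], "source": "memory"}
    return {"draft": nmt(sentence), "source": "nmt"}
```

Because jurilinguists update the memory weekly, sentences they have already perfected are reused verbatim, which is how the system's quality compounds over time.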
CAS is now exploring how the NMT tool could be expanded to other internal users, as well as for the translation of its own administrative documents.
2.1.3 PEI Courts Services – A Methodical and Phased Approach
While facing challenges and opportunities unique to smaller jurisdictions, the approach of Prince Edward Island’s Court Services (CS) to exploring the use of AI provides several valuable lessons for any court or tribunal to consider. The CS recently piloted an AI tool to produce rough transcripts as a starting point to generating official human-led transcripts. The goal was to improve access to justice by reducing delays related to transcript production caused by a shortage of stenographers. The CS began with a clearly defined problem and undertook a methodical approach to testing this tool. The jurisdiction’s small size allowed for a more nimble, though no less rigorous, idea-generation phase, taking into account resource limitations. At present, the quality of output produced by the tool is not sufficient for use by the CS, even as a rough transcript, but they continue to engage with the transcription tool’s vendor to seek improvement based on regular feedback.
Step-by-step process
- Identifying the problem: the issue of timely production of transcripts emerged as a pressing concern. While the total caseload of the province’s courts might be lower in absolute terms than that of larger jurisdictions, the small number of court staff meant that the relative workload per person remained quite high, even with transcripts of civil proceedings being handled by external providers. Rough transcripts can both provide a basic, unofficial record of proceedings to assist judges in drafting decisions and offer a base that stenographers can refine to produce a certified transcript.
- Leveraging existing tools and circumstances: when the vendor already contracted by the province to provide audio recording services for its three courts launched a transcription service as an additional feature, the CS seized the opportunity to test the enhanced tool. The small number of officials and staff allowed internal discussions, and therefore decisions, to proceed quickly.
- Initial piloting: the CS purchased a limited number of enhanced licenses to test the transcription tool to determine whether broader implementation would be appropriate.
- Feedback cycles: staff were concerned about the quality of output produced by the tool and communicated this internally, with feedback then shared with the vendor. This prompted an ongoing conversation between the CS and vendor, including improvements made on the basis of this input and the provision of additional training and support sessions.
Key lessons learned
- Ensure everyone in the court shares a common understanding of the AI tool’s purpose by clearly communicating the goal and scope of the project, the testing process, and the targeted outcome. Clear communication can help avoid differences in expectations, such as whether the goal of the project is to produce rough transcripts for internal use or to create polished, edited transcripts requiring minimal human revision to become certified.
- Develop a project framework that structures how the AI tool will be tested. Include timelines for different phases and ensure that time and resources are set aside for staff to run tests, taking into account the demands of daily court operations.
- Identify a competent, knowledgeable champion for the project. This person will work with the tool to understand its strengths and weaknesses, while promoting the overall aim of the project. Based on their subject-matter expertise and interest in the project, the champion can assess and help to improve the tool while reassuring peers and other staff members.
2.1.4 Superior Court of Québec – Specialized Chatbots as a Judicial Support Tool
In Fall 2025, the Superior Court of Québec published its Artificial Intelligence Governance Framework and launched its project piloting chatbots as a judicial support tool. The Framework consists of guidelines aiming to structure a responsible, transparent, and considered use of AI, a priority that was included in the Court’s 2024-2026 Strategic Plan. With this approach characterized by caution, rigor, and openness, the Court embarked on its exploration of AI in a gradual fashion. The key principles of the Framework also represent the main pillars around which the pilot project was structured:
- Transparency and public confidence
- Judicial independence
- Support for the exercise of judicial functions
- Ethics and professional conduct
- Caution
- Responsibility
These also reflect the Canadian Judicial Council’s publications, notably its Guidelines for the Use of Artificial Intelligence in Canadian Courts, and its Ethical Principles for Judges.
Pilot project
The Court chose to focus on judicial support in the areas of legal research, drafting, and translation, rather than on administrative tasks, despite appreciating that the latter represent a promising avenue for AI more broadly. This perspective, which gave rise to the chatbot pilot project, was based on a reflection that considered both (i) the possible and acceptable uses of AI, while respecting judicial independence; and (ii) the current context, in which many free, easy-to-use AI tools are not tailored to the judicial field and entail numerous risks. The pilot project, which offers judges an appropriate alternative, is characterized by the following main elements:
- A phased deployment allows the tool to be tested and the results applied as the project moves forward. Such experimentation takes place in a controlled environment, with a limited number of judicial participants. Each phase includes at least two cohorts of judges, so that their experiences can be compared and an overall assessment conducted.
- The participation of judges is voluntary, and a limited yet diverse pool strikes the balance between limitation of risks associated with a larger-scale deployment, and learning more about the unique needs of different judges (for example: a puisne judge vs a chief justice; judicial or executive functions).
- Clear, continuous, and timely communication is essential – internally, with judges and court administrators, and with the public. The Framework, as well as sustained efforts regarding training, explanations in plain language, and increasing awareness, inspire confidence in the pilot project. These represent investments that are just as important as the resources allocated for the development or procurement of a tool itself.
- A well-defined structure from the outset of the pilot project ensures that practical, technological, and especially judicial independence-related limits are respected. This promotes precise communication, as the tool’s limits (what it does and what it does not do) are easier to express and the risk of them expanding throughout the project is limited. In this case, the chatbots:
- Refuse to answer any request to generate a decision (to take one or to create a judicial text)
- Have a limited capacity to summarize information (to avoid hallucinations)
- Are crafted solely to support judges; human control is always necessary
- Draw upon a defined pool of reliable legal sources
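The first limit in the list above, refusing any request to generate a decision, can be illustrated as a simple pre-check that runs before any model call. This sketch is purely illustrative: the Court's actual chatbots and their filtering logic are not public, and the trigger phrases below are invented.

```python
# Illustrative guardrail sketch: refuse decision-generation requests
# outright, before the prompt ever reaches the underlying model.

REFUSAL = ("This assistant cannot draft or decide judicial outcomes. "
           "Decision-making remains with the judge.")

# Hypothetical trigger phrases; a real system would use a more robust check.
DECISION_PATTERNS = ("draft a judgment", "write a decision",
                     "decide this case", "draft reasons")


def handle_request(prompt):
    lowered = prompt.lower()
    if any(p in lowered for p in DECISION_PATTERNS):
        return {"refused": True, "reply": REFUSAL}
    # Otherwise the request would proceed to the support assistant, which
    # draws only on a defined pool of reliable legal sources.
    return {"refused": False, "reply": None}
```

Expressing the limit in code makes it easy to state precisely what the tool does and does not do, and keeps the limit from quietly expanding during the pilot.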
2.2 Tribunals
2.2.1 Administrative Tribunals Support Service of Canada – Innovative Spirit and Desire to Lead
The Administrative Tribunals Support Service of Canada (ATSSC) provides facilities and services to 12 of Canada’s federal administrative tribunals and two Territorial Labour Boards.
Centralized coordination within a single organization allows the tribunals – particularly the smaller ones – to access economies of scale that would otherwise be unattainable. The ATSSC delivers a full suite of services, including registry, legal, and member support, as well as essential corporate functions such as human resources, finance, information technology, and information management. In accordance with its mandate, the ATSSC cannot impose obligations on the tribunals it supports. While the ATSSC is adopting AI within its internal operations, each tribunal must independently determine whether and how to use AI, drawing on ATSSC guidance and support, as appropriate. Within this framework, the ATSSC has clear scope to examine how AI could enhance the effectiveness and efficiency of its service delivery.
The ATSSC has identified potential AI-supported improvements in quality, timeliness, and use of resources across several key areas. As such, it is undertaking multiple pilot projects to explore how AI could assist both its own operations and those of the tribunals it supports. The ATSSC is committed to balancing innovation with robust accountability, responsibility, and quality control. To ensure any use of AI aligns with its mission and purpose, the ATSSC approaches all potential projects through the lens of improving access to justice. With that goal in mind, its pilot projects all follow a similar methodology:
- Identifying a specific, well-defined issue or need for which AI might be an appropriate tool.
- Targeting individuals or teams across the tribunal client base who are open to participating in a pilot.
- Establishing a solid foundation of technological basics. This may mean several phases before layering in any AI tool, such as developing a non-AI chatbot or implementing a non-AI solution to build a lexicon of relevant translation terms before integrating this work into an AI-enabled tool. Testing multiple tools within one project helps ensure that the tool that is ultimately selected best responds to the identified need.
- Developing and implementing a strong governance framework to monitor progress and results.
The ATSSC is implementing different AI projects, which are in varying stages of completion:
- Pilot testing of an AI tool for legal research by ATSSC counsel has been successful and has been expanded.
- Pilot testing of AI translation tools is nearly complete. As the AI’s output has improved, legal editors have become more comfortable using these tools to generate the first draft of translations. Multiple teams used different translation tools to narrow the original field of five options down to one through a rigorous, evidence-based approach.
- Possible future projects include a chatbot that is solely trained on the information contained in a tribunal’s website to help users more easily navigate complex processes, as well as AI transcription tools.
Successful pilots not only present quantifiable benefits, but also aid in building trust and fostering an environment of openness to trying new things within an organization’s staff.
2.2.2 BC Civil Resolution Tribunal – Modeling a Cautious Approach to AI
The BC Civil Resolution Tribunal (CRT) is Canada’s first online tribunal, offering an accessible and affordable way for parties to resolve a variety of civil law disputes, and emphasizing collaboration to arrive at an agreement. The process takes place outside of court and does not require a lawyer. Most claims proceed through the same four main stages:
- Use the CRT’s free Solution Explorer to access legal information and tools and decide whether to make or respond to a claim
- Negotiate on the CRT’s secure and confidential platform
- If resolution is not achieved through negotiation, a CRT case manager will try to facilitate an agreement
- If neither negotiation nor facilitation are successful in reaching an agreement, an independent tribunal member will make a decision
CRT decisions and orders are enforceable in court, and parties can check on the status of their claim any time through their CRT Account.
The CRT has a unique perspective on AI, as it is both inherently digital and the first forum of its kind. The internet had been established for roughly 30 years before the CRT's launch, and was thus considered a reliable tool, whereas mainstream AI technology is a far more recent development. The CRT's AI Sub-Committee therefore closely monitors AI to remain up to date on developments in the field and to allow for regular discussion and brainstorming on AI-related challenges and opportunities. Both technological and legal expertise among the Sub-Committee's leadership is critical for guarding against premature implementation of AI, so that technical details of specific tools, substantive issues, and how these factors might interact are all rigorously analyzed. The Sub-Committee has determined that AI as it currently exists is not sufficiently reliable to be used in the CRT's legal work.
In demonstrations attended by CRT representatives, even tightly controlled generative AI has provided incorrect information when given exclusively correct information on which to base its answer. As such, the CRT has concluded that material produced by generative AI, even under careful circumstances, cannot be relied upon by CRT staff or members without double-checking its output. The inefficiencies of doing so outweigh the theoretical benefits generative AI could provide.
Further, as the CRT has issued decisions criticizing and penalizing parties for careless use of generative AI, it considers that there is a real reputational risk in depending on the same tool for which it admonishes others.
That said, the CRT has implemented basic non-generative AI in one of its core features: the Solution Explorer. This is an example of a guided pathway, where a user’s answer to a question determines the next question(s), to efficiently direct them towards relevant information. None of the CRT’s content was generated with the assistance of AI; it was all created and organized manually, by staff. AI can streamline the technical process of moving through the various questions, but in this case has no influence on the actual information provided. At present, this is the CRT’s only use of AI.
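A guided pathway of this kind is essentially a decision tree, where each answer selects the next node until the user reaches manually authored information. The sketch below is invented for illustration and does not reflect the Solution Explorer's actual structure or content; its point is that the logic only routes between staff-written passages, never generates them.

```python
# Minimal sketch of a guided pathway: answers walk a hand-built tree of
# staff-authored content. Nothing here is generated; the code only routes.

PATHWAY = {
    "start": {"question": "Is your dispute about a strata (condo) matter?",
              "yes": "strata_info", "no": "small_claims"},
    "small_claims": {"question": "Is the amount claimed $5,000 or less?",
                     "yes": "sc_info", "no": "court_info"},
    "strata_info": {"info": "Staff-written guidance on strata disputes."},
    "sc_info": {"info": "Staff-written guidance on small claims."},
    "court_info": {"info": "This claim may belong in another forum."},
}


def navigate(answers):
    """Walk the tree with a list of 'yes'/'no' answers; return the final
    staff-written information, or the next question if more input is needed."""
    node = PATHWAY["start"]
    for answer in answers:
        node = PATHWAY[node[answer]]
        if "info" in node:
            return node["info"]
    return node.get("question")
```

Because every leaf is written and organized by staff, the tool can streamline navigation without influencing the information provided, which matches the distinction the CRT draws.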
However, CRT staff have access to AI tools embedded by default in corporate software. Those interested in using these tools for administrative tasks must get prior approval from their supervisor, conduct a trial, and report on the results. This methodology fosters an environment of openness to innovation, while limiting risks posed by AI by restricting its experimental use to the individual level and administrative sphere, specifically excluding legal work.