The recent explosion of interest in ChatGPT has sparked discussions about the impact generative artificial intelligence (AI) will have in countless industries, including law. The ability to instantly produce clear writing on virtually any topic creates significant opportunity for efficiency and cost-savings. One area that holds promise for the use of generative AI is decision-writing for administrative tribunals. In this article, I will discuss the promises and pitfalls of generative AI in the legal field and why generative AI could be a useful tool for tribunals to assist with writing reasons for their decisions.

What is generative AI?

Generative AI is a category of AI that generates outputs and content (including text, images, or audio) based on data it has been trained on. ChatGPT is a form of generative AI. In essence, the user inputs a question or prompt and the system produces a response based on prior information it has reviewed and analyzed.

How is generative AI being used in the legal profession?

Generative AI is being used in the legal profession in a variety of ways, including document processing and classification, legal research, and document drafting.[1] Tools have now started to emerge that replicate the interface of ChatGPT, but are designed specifically for legal issues.

One example is Harvey AI, an artificial intelligence tool created specifically for the legal profession. It is backed by OpenAI, the same company behind ChatGPT. It operates in a similar manner to ChatGPT, with users able to ask it legal questions. However, it can also produce legal documents, such as contracts, research memos, or pleadings. Harvey AI is being used by Allen & Overy, one of the largest law firms in the world. According to the firm, 1 in 4 of its lawyers use Harvey AI every day and 80% use it at least once a month.[2]

Some legal professionals have begun using generative AI systems not specifically designed for law to perform legal work. For instance, a judge in Colombia made international headlines when he acknowledged that he used ChatGPT to help him decide a case.[3]

Generative AI has shown an ability to answer legal questions with a high degree of accuracy. GPT-4 (the more advanced model underlying ChatGPT) passed a simulated bar exam with a score in the top 10% of test takers.[4]

What are the risks of using generative AI in law?

One of the primary risks of using generative AI for legal work is that it will at times produce content that is inaccurate or untrue. This is called a hallucination: the AI makes up an event, fact, or source. For instance, a law professor in Canada asked ChatGPT to create a list of research articles on a specific topic. ChatGPT produced a long list of articles, but every one of them was fabricated.[5] In another case, a judge in Michigan asked ChatGPT why a decision he had written was decided the way it was, and ChatGPT invented various cases in response.[6]

The risk of AI creating false or inaccurate information means that where generative AI is used to produce legal work, it must always be checked and verified. Allen & Overy created a list of rules for using Harvey AI, which include that everything produced by the system must be validated.[7]

Many commentators have also stated that generative AI is not at a point where it is able to draft an entire contract or legal document on its own. While generative AI can be used to create a first draft, human judgement is ultimately required to ensure that the legal document is responsive to the facts at hand and adequately addresses all material issues and risks.[8]

Another issue with generative AI is that it is only as good as the data it is trained on. This means that there are risks the data can be manipulated to create biased or incorrect responses. It can also mean that the system is trained on bad legal work or outdated or obsolete information. The user of the AI system will rarely know what data the AI system has been trained on, making it difficult to know if these issues are present.[9]

There are also concerns with the use of confidential and privileged information. If privileged or confidential information is entered into an AI system, that information may be used as part of the system's training data and could surface elsewhere.[10] Until robust privacy safeguards are in place, it would be unwise to input confidential or privileged information into an AI system.

What are the possibilities for AI in drafting tribunal reasons?

ChatGPT is only a prototype, yet it can produce surprisingly well-drafted reasons when asked.[11] I asked it to produce a set of tribunal reasons for a discipline decision involving false billings to an insurer. While the reasons were brief and the facts were fictitious, I was surprised at how well they followed the typical structure and language of a discipline decision. If an AI system were specifically trained on a tribunal's own decisions and caselaw and provided with all the evidence from the hearing, it seems probable that it could generate a well-written and robust first draft of reasons for the tribunal's decision.

There are a variety of reasons why generative AI holds promise for the writing of a tribunal’s reasons:

  • Many tribunal adjudicators are not legally trained, yet are expected to write reasons that can withstand judicial scrutiny. AI can be trained on well-drafted reasons and the necessary legal principles to generate a first draft or template that prompts the decision-maker to address all the requirements the court expects from tribunal reasons.
  • Decisions from a tribunal normally follow the same structure. In fact, tribunal adjudicators are often given a template to use in drafting their decision. Generative AI is particularly well-suited to the creation of documents that follow a pattern or repeatable structure.[12]
  • Significant portions of tribunal decisions are often just a summary of the issues and evidence. AI could be used to create a first draft of these parts of a decision, while leaving the analysis section to be written by the decision-maker.
  • Many tribunals have a tremendous volume of cases, which makes it difficult to write and release reasons in a timely manner. For regulators, this includes not only tribunals but also complaints and screening committees. Some regulators have such a high volume of cases that they hire decision-writers whose full-time job is to help tribunals and committees write reasons. Generative AI could significantly reduce the burden of decision-writing by automating the aspects of writing that are time-consuming.
  • Given that most administrative hearings are open to the public, the risks of inputting exhibits and transcripts from a hearing into an AI system are lessened. As those documents are publicly accessible, this may address concerns regarding confidentiality that can be an issue with AI systems.

While generative AI holds tremendous potential, it will never be a substitute for human decision-makers. In addition to the concerns with accuracy and the use of judgment in rendering decisions, there are moral and legal issues in delegating actual decision-making to AI. A recent Federal Court decision suggested that AI can be used to help draft a decision, so long as the decision itself is made by the administrative decision-maker.[13] Where AI holds promise is in automating aspects of decision writing that are time-consuming, but do not involve actual judgment or decision-making, such as summarizing the evidence and organizing the reasons. While there is not yet a system in place that would allow tribunals to employ generative AI, it is something worth monitoring and will likely be commonplace sooner than we think.

[13] Haghshenas v Canada (Citizenship and Immigration), 2023 FC 464 at paras 24 and 28.
