AI in Legal Writing: Hallucinations and Other Risks

Artificial intelligence (AI) has been making inroads into various industries, and the legal field is no exception. With the advent of powerful language models like ChatGPT, the prospect of using AI to assist in drafting legal briefs is a tantalizing possibility. However, as with any new technology, its use carries significant risks.

One of the most glaring risks is the potential for AI to create fictitious information, a phenomenon known as “hallucinations.” This issue came to light in a high-profile case, Mata v. Avianca, in which two New York lawyers submitted an AI-generated brief containing citations to non-existent court cases. Despite seeking assurance from ChatGPT that the cases were real, the lawyers were sanctioned for acting in bad faith once the fabricated citations came to light.

A recent study by Stanford University researchers, posted as a preprint on arXiv, further underscores this concern, finding that legal hallucinations occur a staggering 69% of the time when using ChatGPT. This alarming statistic serves as a cautionary tale, emphasizing the need for thorough fact-checking and verification when relying on AI-generated content.

While hallucinations are a significant concern, they are not the only risks associated with the use of AI in legal brief drafting. Other potential pitfalls include:

  • Inaccurate Legal Citations: AI tools may struggle with generating or formatting legal citations correctly, leading to inaccuracies that could undermine the credibility of a legal brief.
  • Jurisdictional Differences: AI systems may fail to recognize nuances in jurisdictional rules and conventions, resulting in the inclusion of irrelevant or improperly formatted citations.
  • Outdated Legal Information: If the legal databases used for training are not regularly updated, the AI may provide citations based on superseded or outdated authorities, posing grave risks in rapidly evolving areas of law.
  • Lack of Context: AI may struggle to comprehend the context in which a citation is being used, leading to the generation of technically accurate yet irrelevant citations.
  • Biased Training: If the AI system is trained on limited or biased datasets, it may inadvertently perpetuate biases present in the training data, affecting the accuracy and comprehensiveness of the citations provided.
  • Unconventional Citation Formats: AI tools may encounter difficulties when dealing with less common or specialized citation styles used by legal professionals.
  • Lack of Judgment: Perhaps most crucially, AI lacks the nuanced judgment and legal expertise of human professionals, potentially failing to grasp the subtleties, legal strategies, or contextual considerations that a seasoned attorney would bring to the drafting process.

While the risks associated with the use of AI in legal brief drafting are significant, it would be shortsighted to dismiss the technology altogether. AI can be a valuable tool in the drafting process, assisting with research, citation generation, and even preliminary drafting. However, it is imperative that legal professionals approach AI as an aid rather than a substitute for human expertise.

Careful manual review of citations, staying updated on legal developments, and verifying AI-generated content against reliable legal sources are crucial steps to ensure the accuracy and reliability of legal briefs. Additionally, legal professionals must uphold their ethical duty to ensure the integrity of their work, exercising caution when relying on AI-generated content.
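As a purely illustrative aid to that manual review, a short script can pull out every citation-like string from a draft so an attorney can check each one against a reliable legal source. This is a sketch under stated assumptions: the regular expression below covers only a few common U.S. reporter formats and will miss many real citations, and it deliberately does not attempt to judge whether a citation is genuine — that remains a human task.

```python
import re

# Illustrative only: match a few common U.S. reporter formats, e.g.
# "925 F.3d 1339" or "347 U.S. 483". Real citation formats are far
# more varied; this pattern is an assumption for demonstration.
CITATION_PATTERN = re.compile(
    r"\b\d+\s+"                                        # volume number
    r"(?:U\.S\.|F\.\d?d|F\. Supp\. \d?d|S\. Ct\.)\s+"  # common reporters
    r"\d+\b"                                           # first page
)

def extract_citations(draft_text):
    """Return every citation-like string found in the draft text."""
    return CITATION_PATTERN.findall(draft_text)

# Sample draft; the first citation is the fabricated one from the
# New York sanctions case, the second is a real landmark decision.
draft = (
    "Plaintiff relies on Varghese v. China Southern Airlines, "
    "925 F.3d 1339, and Brown v. Board of Education, 347 U.S. 483."
)

for citation in extract_citations(draft):
    print("Verify manually:", citation)
```

Note that the script treats both citations identically — it cannot tell that the first is fabricated. The point of the sketch is to generate a checklist for human verification, not to replace it.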

In conclusion, the rise of AI in legal writing presents both opportunities and challenges. By embracing AI as a tool while maintaining a firm grasp on human expertise and judgment, legal professionals can navigate these risks and harness the potential of this technology to enhance their practice responsibly.