As artificial intelligence (AI) makes ever deeper inroads into the legal profession, it brings both opportunities and challenges. One emerging issue in this tech-enabled landscape is AI hallucinations: instances where AI systems, particularly large language models such as GPT, generate or invent information that does not exist in reality.
![](https://static.wixstatic.com/media/11062b_e6d3f7edd5ec4991a50fb7482ef0f9e7~mv2.jpg/v1/fill/w_980,h_735,al_c,q_85,usm_0.66_1.00_0.01,enc_auto/11062b_e6d3f7edd5ec4991a50fb7482ef0f9e7~mv2.jpg)
Understanding AI Hallucinations
AI hallucinations refer to instances when AI systems, especially generative models like GPT, invent or fabricate information that does not exist or is factually incorrect. They are an artefact of the AI's training process and of the model's lack of grounding in real-world facts.
Generative AI models like GPT are trained on vast datasets of text drawn from books, websites, and other sources.
During training, the model learns patterns and structures in this data, enabling it to generate new content that mirrors those patterns. However, the AI does not understand the world or context the way humans do. It does not know what is true or false, or what is current or outdated; it simply produces output that follows its learned patterns.
Therefore, the AI might sometimes generate information—like a nonexistent case citation or a fictitious legal precedent—that it has no way of verifying. These are known as AI hallucinations.
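To make this concrete, the toy model below learns word-to-word patterns from a few sentences and then generates text by sampling them. It is nothing like GPT internally (real models are neural networks trained on billions of documents), but it illustrates the core point: pattern-based generation can sound fluent while asserting things no source ever said, and the model has no mechanism for noticing.

```python
import random

# A toy "language model": it learns which word follows which from a
# tiny training corpus, then generates text by sampling those patterns.
# It has no concept of whether its output is true.
corpus = (
    "the court held that the contract was void "
    "the court held that the claim was barred "
    "the contract was valid and the claim succeeded"
).split()

# Learn bigram transitions: word -> list of observed next words.
transitions = {}
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions.setdefault(current_word, []).append(next_word)

def generate(start: str, length: int = 12) -> str:
    """Sample a fluent-looking sequence from learned patterns alone."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# The output recombines fragments of the training data into sentences
# that read plausibly but were never in any source, e.g. "the court
# held that the contract was valid and the claim was barred".
print(generate("the"))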
Recognising and Mitigating Risks
Legal practitioners must remain aware of the potential for AI hallucinations, given the crucial role that accurate information plays in legal processes. The following are a few strategies they can employ:
Critical Evaluation: Lawyers must exercise scepticism and critical evaluation when using AI tools, scrutinising the information they generate. They should be especially wary of uncommon case names, unusual legal arguments, or unfamiliar legal precedents, as any of these could be an AI hallucination.
Corroboration: To ensure the authenticity of the information, it is important to cross-check AI-generated data against trusted legal databases or primary sources; a minimal sketch of an automated first-pass check appears after this list. This corroborative step can help to avoid the accidental use of fabricated information.
Using Multiple Tools: Using different AI tools or systems can also help. If two or more tools provide the same information, it increases the likelihood of the information being accurate. However, it's essential to remember that this doesn't replace the need for corroboration from primary sources.
Continuous Learning: AI technology is constantly evolving, and understanding its capabilities and limitations is vital. Lawyers should invest in training or resources to better understand AI tools, including their potential for hallucinations.
Ethical Considerations: Lawyers have an ethical obligation to ensure the accuracy of the information they use and present in their work. This includes an understanding that AI tools, while powerful and useful, are not infallible and may sometimes provide incorrect or fabricated information.
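Part of the corroboration step can be automated. The sketch below assumes a legal database with a public search API; CourtListener is used purely as an illustration, and both the endpoint URL and the JSON fields are assumptions to be checked against the provider's documentation. A search hit is only a first pass: the lawyer must still read the authority itself.

```python
import requests

# Corroboration helper: before relying on an AI-supplied case name,
# look it up in a trusted legal database. The endpoint and response
# shape below are assumptions modelled on CourtListener's public
# search API; confirm both against the provider's documentation.
SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def case_appears_in_database(case_name: str) -> bool:
    """Return True only if the database reports at least one match."""
    response = requests.get(SEARCH_URL, params={"q": case_name}, timeout=10)
    response.raise_for_status()
    return len(response.json().get("results", [])) > 0

# "Varghese v. China Southern Airlines" was one of the fabricated
# citations in the Schwartz matter; a lookup like this would have
# flagged it for closer scrutiny.
cited = "Varghese v. China Southern Airlines"
if not case_appears_in_database(cited):
    print(f"WARNING: no record of '{cited}' found; treat as a possible hallucination.")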
To illustrate, consider the regrettable incident involving Steven Schwartz, an attorney at the firm Levidow, Levidow & Oberman. Schwartz relied on the generative AI ChatGPT to supplement his legal research, a practice becoming increasingly commonplace in law firms. In this instance, however, ChatGPT generated "citations and opinions" that were not authentic. These included references to non-existent cases, leading to the submission of misleading information in Mata v. Avianca before the US District Court for the Southern District of New York.
Schwartz admitted his reliance on the AI-provided legal opinions without confirming their authenticity, a misstep that highlights a critical risk of AI hallucinations: the potential to inadvertently mislead or misinform, especially when AI is used in sensitive fields such as law.
As noted above, AI hallucinations are a by-product of the training process. Generative models learn patterns from vast corpora of text and generate new content from those patterns, but they neither understand the world nor have access to real-time information, and that gap can yield output that is factually inaccurate or entirely made up.
For legal practitioners, these AI hallucinations can be particularly hazardous. Misplaced reliance on AI-generated information can lead to incorrect legal arguments, misinterpretations of law, and potentially severe ethical and professional consequences. For instance, citing a non-existent case or legal precedent, as seen in the Schwartz incident, can undermine the credibility of a legal argument and damage a lawyer's professional reputation.
So, how can legal practitioners leverage AI while avoiding the pitfalls of AI hallucinations?
Double-Check: As with any source of information, it is imperative to cross-verify what the AI provides. This means checking case citations, legal precedents, and any other legal information against reliable legal databases (a small extraction sketch follows this list).
Use AI as a Supplement, Not a Replacement: AI should be used to enhance, not replace, traditional legal research methods. Human judgment and professional expertise remain irreplaceable.
Stay Informed: As AI continues to evolve, staying informed about its strengths, limitations, and potential risks is crucial. This includes understanding how AI works and the potential for AI hallucinations.
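A simple way to enforce the double-check habit is to extract everything that looks like a citation from an AI draft into a checklist for human verification, keeping the AI in the supplementary role described above. The regexes below are a rough sketch covering "Name v. Name" pairs and volume/reporter/page citations; real citation formats are far more varied than this.

```python
import re

# Triage step for the double-check habit: pull anything that looks like
# a case citation out of an AI draft so a human can verify each one
# against a reliable database before it goes anywhere near a filing.
CASE_NAME = re.compile(
    r"[A-Z][A-Za-z'.]+(?: [A-Z][A-Za-z'.]+)* v\.? [A-Z][A-Za-z'.]+(?: [A-Z][A-Za-z'.]+)*"
)
# Volume/reporter/page citations such as "925 F.3d 1339"; a rough
# sketch, nowhere near full Bluebook coverage.
REPORTER_CITE = re.compile(r"\b\d{1,4} [A-Z][A-Za-z.]*\.? ?(?:2d|3d|4th)? \d{1,5}\b")

def citation_checklist(ai_draft: str) -> list[str]:
    """Return every citation-like string found, for manual verification."""
    found = CASE_NAME.findall(ai_draft) + REPORTER_CITE.findall(ai_draft)
    return sorted(set(found))

draft = (
    "As held in Varghese v. China Southern Airlines, 925 F.3d 1339, "
    "the tolling argument succeeds."
)
for citation in citation_checklist(draft):
    print(f"[ ] verify: {citation}")
```

Run on the sample draft, this prints an unticked checklist item for the case name and one for the reporter citation; each must be confirmed by a human before the draft is used.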
Conclusion
While AI offers promising potential to augment legal research and practice, it is not without risks. As the case of Steven Schwartz reminds us, awareness and careful management of these risks are vital to effectively leveraging AI in the legal profession. By understanding and navigating the phenomenon of AI hallucinations, legal practitioners can utilise AI as a powerful tool without falling into its potential pitfalls.
Sign up to our Introduction to Prompt Engineering course here.