In March 2023, a New York attorney representing a client in a lawsuit against an airline submitted a brief citing more than half a dozen cases. When the judge reviewed the citations, the cases could not be found. The attorney then admitted to using OpenAI’s ChatGPT to perform the legal research; the artificial intelligence tool had provided fake caselaw and fake citations.
The incident resulted in a $5,000 fine for the attorneys and law firm that submitted the brief.
The federal judge who imposed these fines stated that advances in technology are common and there is nothing inherently improper about using artificial intelligence tools; however, “existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”1
In June 2023, following this incident, a U.S. District Judge for the Northern District of Texas issued a standing order requiring attorneys to file a certificate stating either that no portion of their court filings was generated by artificial intelligence tools like ChatGPT or, if AI was used, that the attorney had checked and verified the AI-generated material. The court stated that it would not accept any filings that did not contain this certificate.
Incorporating Artificial Intelligence into Legal Research
Stories like these have filled the news since the emergence of generative AI tools like OpenAI’s ChatGPT in late 2022. While opinions on the use of AI differ, the rise of artificial intelligence has already made an impact on the legal field.
Following generative AI’s introduction to the public, the prominent legal databases Lexis and Westlaw both introduced generative AI tools specific to legal research. These tools assist attorneys by using artificial intelligence to pull caselaw and other resources from the companies’ databases in response to legal research questions.
Jennifer M. Schank, U.W. 2010, is a shareholder with Fuhrman & Dodge, S.C., in Middleton, where she focuses on debtor/creditor rights, litigation, and real estate and landlord/tenant matters.
Emilie Dozer, U.W. Law School Class of 2024, is currently a law clerk at Fuhrman & Dodge, S.C., in Middleton.
Verify, Verify, Verify
While AI has emerged as a tool for conducting research, there are risks associated with using artificial intelligence for legal inquiries.
One of the biggest problems with artificial intelligence is the presence of “hallucinations”: responses that sound plausible but are untrue, such as citations to cases that do not exist.
In a recent study by Stanford University, researchers found that “hallucination” rates remain a pervasive problem when tools such as ChatGPT are used for legal questions. However, the study noted that generative AI tools specifically trained for legal use may perform better than general-purpose tools like ChatGPT.
For example, the AI-assisted research tools offered through Westlaw and Lexis use Retrieval Augmented Generation (RAG), a technique that attempts to prevent the artificial intelligence from fabricating case names or citations. Rather than generating answers from memory, these tools ground their responses in the actual language of content found in the Lexis and Westlaw databases, unlike general-purpose public tools such as ChatGPT.
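For readers curious about the mechanics, the sketch below illustrates the general RAG pattern in simplified form. It is a minimal illustration under stated assumptions, not the actual Westlaw or Lexis implementation: the search_cases and generate functions, and the example case they return, are hypothetical placeholders for a real caselaw database query and a real language-model call.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern.
# NOTE: search_cases() and generate() are hypothetical placeholders,
# not the actual Westlaw or Lexis APIs.

def search_cases(question: str, limit: int = 5) -> list:
    """Hypothetical stand-in for querying a vetted caselaw database."""
    # A real system would run a search over an index of actual opinions.
    return [
        {
            "citation": "Example v. Sample, 123 F.3d 456 (7th Cir. 1997)",
            "excerpt": "Relevant language quoted from the opinion...",
        }
    ][:limit]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a large language model."""
    return "Drafted answer citing only the cases provided above."

def answer_legal_question(question: str) -> str:
    # Step 1: retrieve real documents first, rather than letting the
    # model answer from memory (where hallucinated citations arise).
    cases = search_cases(question)
    sources = "\n\n".join(f"{c['citation']}\n{c['excerpt']}" for c in cases)

    # Step 2: constrain the model to the retrieved text, so any case
    # names and citations come from actual database content.
    prompt = (
        "Answer using ONLY the cases below. If they do not answer the "
        "question, say so. Do not cite any other authority.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return generate(prompt)

print(answer_legal_question("When is a breach-of-contract claim time-barred?"))
```

Because the retrieval step draws on the same curated caselaw collections attorneys already search, citations produced this way are less likely to be invented outright, though, as the study above suggests, they still warrant verification.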
Artificial intelligence tools specifically tailored to legal work may reduce the risk of “hallucinations” in legal research. However, attorneys should still verify any findings generated by AI.
Law firms looking to incorporate artificial intelligence tools can introduce oversight methods such as firm guidelines on the use of AI. Firms should also train individuals on how to use artificial intelligence tools and inform staff about the risks of “hallucinations” when using AI for legal research.
Adding safety measures like these can help firms ensure that artificial intelligence tools act as a helpful resource, rather than creating another cautionary tale against the use of this new technology.
The Ethical Concerns Around Using AI for Legal Research
Despite the widespread use of artificial intelligence, attorneys should remain aware of their ethical obligations when using generative AI tools for legal research. Over the past few months, several state bar associations, including California, Florida, Michigan, and New Jersey, have released opinions concerning ethics and the use of AI in the legal profession.
These ethics opinions all emphasize the duty of diligence and recommend that attorneys exercise this duty by verifying all information output by artificial intelligence software. The opinions explain that a lawyer’s professional judgment should not be delegated to AI; instead, lawyers have a responsibility to critically analyze any results artificial intelligence gives in response to legal inquiries.
The risk of “hallucinations” when using artificial intelligence for legal research also raises important ethical considerations for attorneys. In its ethics opinion, the State Bar of California explained that attorneys have an ethical obligation of candor to the tribunal, and submitting fake caselaw while asserting that the findings are accurate may compromise this obligation. Attorneys should check with their jurisdictions to ensure that they are adhering to any disclosure requirements around the use of generative AI.
Another ethical duty frequently raised around the use of artificial intelligence is an attorney’s duty of confidentiality. Attorneys who use AI for legal research should consider not entering confidential client information into the AI program and should avoid entering details that could definitively identify a specific client. Attorneys should also review the terms of use for any AI program they use to learn how the program handles input data. Generative AI products generally use information entered into their systems to train the underlying software, so attorneys should research a program’s policies on data retention, data sharing, and data use to ensure that it has adequate security measures for any information entered into the system.
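To make that caution concrete, the sketch below shows one way a firm might scrub obvious client identifiers from a prompt before it is sent to an outside AI service. This is a simplified, hypothetical safeguard: the CLIENT_IDENTIFIERS list, the placeholder tokens, and the redact function are all assumptions for illustration, and no simple filter substitutes for reviewing a vendor’s data-use terms or for human review of each prompt.

```python
import re

# Hypothetical safeguard: swap known client identifiers for neutral
# placeholders before a research prompt leaves the firm. A simple
# find-and-replace cannot catch every identifying detail, so human
# review of each prompt is still required.

CLIENT_IDENTIFIERS = {  # hypothetical, matter-specific values
    "Jane Doe": "[CLIENT]",
    "Acme Holdings LLC": "[OPPOSING PARTY]",
    "2024-CV-0123": "[CASE NO.]",
}

def redact(prompt: str) -> str:
    """Replace each known identifier with its neutral placeholder."""
    for term, placeholder in CLIENT_IDENTIFIERS.items():
        prompt = re.sub(re.escape(term), placeholder, prompt, flags=re.IGNORECASE)
    return prompt

raw = "Can Jane Doe recover fees from Acme Holdings LLC in case 2024-CV-0123?"
print(redact(raw))
# Prints: Can [CLIENT] recover fees from [OPPOSING PARTY] in case [CASE NO.]?
```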
A Tool That Is Likely to Stick Around
Despite concerns over “hallucinations” and ethical obligations, artificial intelligence tools for legal research do not seem to be going away any time soon.
As large legal databases continue to develop AI tools specific to the field of law, lessons on this technology are being introduced into legal research and writing curricula. This integration of generative AI tools into law school classrooms may produce a new wave of lawyers who increasingly use the technology to conduct legal research.
As generative AI has become more prominent, tools specifically tailored to legal work have become widely available through databases like Lexis and Westlaw. The availability of this technology, combined with instruction on artificial intelligence in law school classrooms, will likely ensure that AI remains a legal research tool for some time to come. Understanding how this evolving technology may affect the legal field, and the risks posed by AI in the form of “hallucinations,” can help firms use AI efficiently and effectively while mitigating potential harms.
This article was originally published on the State Bar of Wisconsin’s Solo/Small Firm & General Practice Blog of the Solo/Small Firm & General Practice Section. Visit the State Bar sections or the Solo/Small Firm & General Practice Section web pages to learn more about the benefits of section membership.
Endnote
1 Mata v. Avianca, Inc., No. 22-CV-1461 (PKC), 2023 WL 4114965 (S.D.N.Y. June 22, 2023).