Industry reports released in November document the rapid rise of generative artificial intelligence (GenAI) in legal practice. According to Clio’s 2024 Legal Trends Report, AI adoption among legal professionals has grown from 19% in 2023 to 79% this year. Similar findings from Wolters Kluwer show that more than two-thirds of attorneys now use GenAI at least weekly, with about one-third using it daily.
The American Bar Association (ABA) has recognized this dramatic growth with its recent issuance of Formal Opinion 512, which provides ethical guidance to attorneys on the use of GenAI tools. According to the ABA, lawyers need not become experts on GenAI, but they must develop a reasonable understanding of both the capabilities and the limitations of any AI tools they use: “Even in the absence of an expectation for lawyers to use GAI tools as a matter of course, lawyers should become aware of the GAI tools relevant to their work so that they can make an informed decision, as a matter of professional judgment, whether to avail themselves of these tools or to conduct their work by other means.” In other words, competent representation requires attorneys to understand GenAI sufficiently to make informed decisions about its use – even if they ultimately choose not to incorporate these tools into their practice.
While many lawyers have begun exploring and even embracing GenAI, others are still considering whether and how to use it. This article offers advice for approaching GenAI in legal practice, helping attorneys become sufficiently informed to make decisions about adoption while guiding those who choose to implement these tools toward ethical and effective use. The article examines types of GenAI tools and key policy considerations, and provides a step-by-step approach to building competence.
Types of GenAI Tools
There are several GenAI applications relevant to the practice of law. While GenAI can create images, synthesize voices, and even generate video content, text generation through large language models (LLMs) is currently the most relevant and widely used application in legal practice. LLMs process and generate text by analyzing vast amounts of training data, enabling them to assist with tasks ranging from document review to legal research and drafting. Like word processors, grammar checkers, and practice management software, LLMs are powerful tools to support legal practice – but they don’t replicate human reasoning or judgment and should be viewed as aids to, not replacements for, attorney expertise.
General-purpose LLMs, such as ChatGPT, Microsoft Copilot, and Google Gemini, are widely available in both free and paid versions. They can assist with a variety of legal tasks, including idea generation, research, drafting, and summarizing documents. While they are capable of generating sophisticated and accurate responses, they may also hallucinate incorrect information that sounds equally authoritative. Additionally, these models can reflect biases present in their training data, particularly around gender, race, age, and other protected characteristics. Therefore, while thorough vetting is essential for all LLM outputs, it’s especially important when using general-purpose LLMs.
Legal-specific LLMs, such as Lexis+ AI and CoCounsel (Westlaw), combine LLM technology with the authoritative legal content available in their respective databases. This integration, known as retrieval-augmented generation (RAG), allows the LLM to access and cite current legal sources when generating responses.
While RAG can help reduce hallucination risks, a response from a legal LLM might still be incorrect or, more subtly, might describe the law accurately but cite a source that doesn’t support its claims. Also, like their general-purpose counterparts, legal LLMs can reflect broader societal biases in their outputs. Therefore, while legal LLMs provide additional safeguards through their integration with established legal research platforms, careful vetting of both content and citations remains essential.
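To make the retrieval step concrete, here is a minimal sketch of RAG in Python: find relevant passages, then ground the model’s prompt in them. It is a toy illustration, not how Lexis+ AI, CoCounsel, or any other product works internally – the document store, keyword scoring, and quoted materials are hypothetical stand-ins.

```python
# A minimal, illustrative sketch of retrieval-augmented generation (RAG).
# The document store, scoring method, and passages are simplified stand-ins;
# commercial legal tools use far more sophisticated retrieval and models.

# Toy "database" of legal passages (hypothetical excerpts for illustration).
DOCUMENTS = [
    {"cite": "Wis. Stat. 802.05", "text": "Sanctions for filings made without reasonable inquiry"},
    {"cite": "SCR 20:1.1", "text": "A lawyer shall provide competent representation to a client"},
    {"cite": "SCR 20:1.6", "text": "A lawyer shall not reveal information relating to the representation"},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Rank passages by naive keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda d: len(words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble a prompt that grounds the model in the retrieved sources."""
    sources = retrieve(question)
    context = "\n".join(f"[{d['cite']}] {d['text']}" for d in sources)
    return (
        "Answer using ONLY the sources below, and cite them.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What are a lawyer's duties of competent representation?"))
# The assembled prompt is then sent to an LLM. This is why the model can
# cite current sources -- but also why a cited source may not actually
# support the claim: the model still composes the answer itself.
```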
Before You Begin: Review Use & Privacy Policies
Before using GenAI tools, attorneys should review the policies that govern their use. Building on principles established in the Model Rules of Professional Conduct, ABA Formal Opinion 512 provides specific guidance for AI use in legal practice. The opinion clarifies that attorneys must demonstrate competence when using GenAI tools, protect client confidentiality, disclose significant AI use to clients, verify the accuracy of AI-generated content, ensure that firm personnel and vendors follow these guidelines, and charge reasonable fees for AI-assisted work.
Some courts have established policies about AI use in legal filings. These may require disclosure when such tools are used and certification that the content has been personally vetted. Some courts also require identification of the specific AI tool used and how it was employed. While approaches vary by jurisdiction, they all stress that attorneys bear ultimate responsibility for the accuracy of AI-assisted work.
Attorneys should also review any internal organizational policies governing the use of GenAI. Such policies may specify approved AI platforms, required training, documentation requirements for AI use, quality-control measures, and billing guidelines for AI-assisted work. If your organization doesn’t have a policy on GenAI use, consider developing one.
Finally, carefully review GenAI vendor privacy and security policies. Look for details about data handling, storage, and protection, including data-retention periods, encryption standards, and whether your inputs might be used to train the vendor’s models. Although legal-specific LLMs often provide enhanced security measures because these vendors understand that confidentiality is essential to legal practice, it’s still important to review their policies.
Getting Started with GenAI
Privacy can sometimes be controlled at the individual level, so check a GenAI tool’s settings before using it. Look for privacy controls that manage conversation history, data retention, and whether your inputs can be used for model training. Some tools also offer settings such as “temperature,” which controls how creative or focused the AI’s responses will be – lower settings produce more consistent, conservative outputs, while higher settings allow more variety and creativity.
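For readers curious what such a setting looks like under the hood, here is a minimal sketch of adjusting temperature through an LLM’s programming interface, assuming the OpenAI Python SDK as one example (it requires an API key, and other vendors expose similar parameters). In most consumer tools, the equivalent control, where available, appears in a settings menu.

```python
# A minimal sketch of setting "temperature" via an LLM API, using the
# OpenAI Python SDK as one assumed example. Not specific to any legal
# product; the model name is a hypothetical choice for illustration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",   # hypothetical model choice
    temperature=0.2,       # low: more consistent, conservative responses
    # temperature=1.0,     # higher: more varied, creative responses
    messages=[
        {"role": "user", "content": "Summarize this lease clause in plain English: ..."}
    ],
)
print(response.choices[0].message.content)
```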
Once you’ve configured your settings, you’re ready to begin querying the LLM. Start with low-stakes tasks to build familiarity with how the tool performs. With general-purpose LLMs, try something personal and familiar, such as asking for recipe suggestions based on ingredients in your refrigerator. Similarly, when using legal-specific LLMs, begin with a hypothetical scenario in an area of law you know well. This allows you to evaluate the quality and accuracy of the responses based on your existing expertise, helping you understand the tool’s strengths and potential weaknesses before using it for client matters.
The fundamental principle of “garbage in, garbage out” applies to LLMs – the quality of your inputs directly affects the quality of your outputs. Prompting, which involves crafting clear instructions to guide the AI’s response, is crucial for getting reliable results. Your prompt should specify the task (such as researching telehealth regulations, analyzing a commercial lease, or drafting an email message) and provide enough context to guide the response. However, never include confidential client information in a prompt.
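As a hypothetical illustration of these principles, the example below names the task, supplies audience context, and constrains the output – with all client-identifying details left out. The subject matter and requirements are invented for the example.

```python
# A hypothetical example of a structured prompt: it names the task, supplies
# context, and specifies the desired output, with no confidential client
# details (names, dates, and identifying facts are generalized).
prompt = """You are assisting a Wisconsin attorney.

Task: Draft a short client-alert email about new telehealth regulations.
Context: The audience is small medical practices with no in-house counsel.
Constraints:
- Plain English, under 300 words.
- Flag any point that must be verified against the current regulations.
Output format: a subject line, then the email body."""

print(prompt)
```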
Even expertly crafted prompts don’t guarantee accurate answers, so every output must be carefully verified against authoritative sources. Pay particular attention to cited cases, statutes, and regulations – LLMs can occasionally fabricate citations or misstate legal principles, even when a response appears authoritative and well reasoned. Develop a systematic approach to verification, treating AI-generated content as a starting point for your analysis rather than a final product.
Takeaways for Ethical and Effective GenAI Use
GenAI offers powerful capabilities that can enhance legal practice. Understanding these tools and their implications is increasingly important for legal professionals. This article provides a foundation for evaluating whether and how to incorporate GenAI ethically and effectively into your practice. If you choose to use GenAI, success requires a measured, thoughtful approach.
Begin with low-stakes tasks for which you can easily verify results based on your existing expertise. As you build confidence with the tools, gradually expand to more complex applications. Throughout this process, maintain rigorous verification practices and ensure compliance with ethical guidelines, court rules, and organizational policies. While the rapid evolution of GenAI may feel overwhelming, taking these steps will help you harness its potential while upholding your professional obligations and protecting client interests.
» Cite this article: 97 Wis. Law. 29-31 (December 2024).