AI could improve efficiency and accuracy while reducing costs, but law firms, companies, and clients should understand its limitations
GPT-4, the “engine” behind ChatGPT, just passed—really, aced—the Uniform Bar Exam. It scored in the 90th percentile (i.e., it did better than 90% of test takers), and, while the threshold to pass the bar varies by state, the states that accept the Uniform Bar Exam require minimum scores between the 44th and 73rd percentiles.
What does this mean for the legal profession, and is AI really going to replace your lawyer? The short answer is no. However, AI has already begun to have a significant impact on the practice of law, and it is likely to continue to transform the legal industry in the coming years.
What is AI?
“AI” stands for artificial intelligence, a branch of computer science meant to simulate human intelligence. Though the name might suggest that AI tools are capable of engaging in human-like reasoning to produce original ideas, the reality is, at least for now, that such technology typically trains models based on large sets of data and identifies patterns and other characteristics. Chatbots like ChatGPT in particular leverage large language models trained to generate human-like text based on user prompts.
AI Is Already Impacting the Practice of Law
The following are a few ways AI is already impacting the practice of law:
- Contract analysis and drafting: AI can be used to review and analyze large volumes of legal contracts quickly and accurately, saving time and reducing errors. Such tools can analyze past contracts and suggest legal language that has been used in similar situations. They can also go a step further by generating contracts, which is a relatively safe area in which to employ generative AI since law firms can draw on large pools of standardized templates and precedent. Such work product is more reliable and consistent than unstructured text outputs.
- Legal research and motion production: AI-powered legal research tools can help lawyers find relevant case law, statutes, and other legal materials more quickly and accurately than traditional research by human reviewers. These tools can also produce initial motion drafts that reference relevant case law, present arguments, and even predict and rebut opposing counsel’s arguments.
- Client communication: AI-powered chatbots and virtual assistants can help lawyers communicate with clients more efficiently and provide answers to common legal questions. AI-powered translation tools also have demonstrated a capacity for translating large amounts of text rapidly and efficiently, at a much lower cost than human translators.
Risks of Employing AI in Legal Practice
While there are clear benefits to using AI in the legal industry, law firms, companies, and individual clients should be aware of the risks and potential negative consequences associated with relying on and deploying AI-powered tools, including the following:
- Lack of nuance: AI algorithms may be able to generate legal advice, but they do not yet have the same level of nuanced understanding, creativity, and judgment that human lawyers bring to interpreting and applying legal principles and precedent. There are also risks of hallucination, where the algorithm confidently generates output that sounds plausible but is in fact made up and potentially erroneous or misleading, and of bias, where the training data for the applicable model itself reflects bias. If an AI-generated result fails to consider relevant information, or otherwise leads to a negative legal outcome that could have been avoided with reasonable human evaluation, it could also create legal liability for the lawyer using the tool. To mitigate these risks, it is essential that AI algorithms be thoroughly tested and audited and that the data they use be unbiased and accurate.
- Privacy concerns: AI-powered tools that collect and analyze large amounts of data can raise privacy concerns, particularly if the data submitted to the tool contains personal information or is otherwise sensitive or confidential. For example, if a user of an AI-powered tool shares personal information (such as an individual’s name, address, phone number, or email address) or other sensitive client information, that information could be stored in the chat logs or used to train the model, posing privacy risks if the logs or model were accessed by unauthorized third parties. Sharing personal information with providers of AI tools without legally compliant data processing agreements in place can also raise compliance concerns where the data is subject to strict privacy and data protection laws such as the European General Data Protection Regulation and/or the California Consumer Privacy Act.
- Lack of transparency: Another challenge with AI-powered legal tools is the lack of transparency in how they operate. While these tools can generate accurate results, the process behind the outcome is often opaque, making it difficult to understand how the algorithm arrived at its conclusion. AI-powered tools also run the risk of amplifying bad legal work if they are trained on poorly drafted or biased material. It is therefore crucial to train AI models on high-quality data and to ensure that the algorithms are regularly updated to reflect changes in the law.
* * *
The use of AI in law has already arrived, and it is almost certainly here to stay. Overall, AI has the potential to make the practice of law more efficient, accurate, and cost-effective. However, law firms, companies, and individual clients should understand the limitations of AI, and attorneys should be transparent with clients about their use of it. It is also important that practitioners have strict internal policies and guidelines in place for using AI in legal practice, ensuring that AI-generated legal advice is used only under the supervision of licensed attorneys and serves to supplement, not replace, human legal expertise.