“How do you get such great results?” I asked.
“Fifteen years of litigating prepared me to summarize details and ask questions newbies can’t fathom,” she explained.
We hear over and over that lawyers must train and develop AI skills. But let’s take a moment to reflect on the fundamental skills lawyers already bring to the table. Lawyers with years of courtroom and examination experience possess exceptionally sharp questioning and fact-finding abilities.
Even without courtroom or litigation experience, lawyers can often use skills picked up at law school, during real-life legal training, and throughout their legal careers to enhance the quality of generative AI output. The same legal skills also help them ensure the output’s validity. Here’s how various legal skills can help give you a leg up when creating and evaluating AI-generated content:
Ask Direct And Targeted Questions
Training in asking precise questions designed to elicit specific details from witnesses or opposing parties applies directly to interacting with generative AI models. Pose clear and focused questions to obtain highly relevant responses. Apply the same expertise to identify gaps or inconsistencies in AI-generated information.
Master Context And Provide The Complete Picture
Similarly, skills in communicating, analyzing, and understanding the context of a situation are invaluable when working with generative AI. Use them to frame questions oriented to a specific person, place, time, and other contextual factors. Detect subtle nuances that alter the meaning of words and phrases. Whittle away extraneous details to input specific, relevant prompts. Do the same to cut irrelevant fluff from AI’s responses.
Sift Out Illogical And Unsupported Details
The ability to spot logical fallacies, inconsistencies, and weak arguments is another valuable skill for ensuring AI-generated output is coherent, logical, and reliable. You can critically evaluate AI-generated responses and identify flawed reasoning such as ad hominem attacks, straw man arguments, or circular reasoning. Identify weak opinions and unsupported claims. Flag conflicting information and contradictory statements. Then, refine your questions to seek clarity and generate more robust and reliable information.
Cross-Examine AI To Root Out Inaccuracies And Bias
Skills used in assessing the credibility and reliability of witness statements during cross-examinations translate well to evaluating AI’s outputs. AI models, or the datasets used to train them, can introduce inherent biases that affect the reliability of the generated information. Sometimes, generative AI tools produce false and misleading information.
To ferret out those instances, treat AI-generated information like a witness you’re about to cross-examine. Scrutinize the sources cited. Fact-check details and verify their accuracy. Cross-reference information with other trusted sources such as legal databases, scholarly articles, or industry authorities. Seek corroboration from multiple sources. Use all your skills in detecting personal interests or prejudices to assess and ensure accuracy and reliability.
Keep AI Models On The Right Track
Experience working with legal precedents helps you establish the appropriate background and guardrails for AI. Provide relevant case law and legal principles to guide the AI model’s reasoning. Incorporate precedents into the conversation to keep AI models on the right track.
As you can see, your real-world legal training, skills, and experience can enhance your digital interactions — especially those with AI. Though there’s much to learn about AI, your distinctive legal knowledge and critical thinking skills can boost your ability to ensure AI-generated content aligns with legal standards and expectations.
Have you embraced generative AI as a productivity tool?
How do you think AI will change the way you practice law in the future?
What skills do you rely on when using generative AI?