In today’s column, I will be providing commentary and coverage of a prestigious symposium on AI lawyering that was recently jointly undertaken by the Tsai Center for Law, Science and Innovation of the SMU Dedman School of Law and Wake Forest Law via a special event entitled “AI Lawyering: Adapting to the Era of ChatGPT and Large Language Models”.
The range of topics encompassed at the symposium consisted of three major areas of focus: (1) AI in legal education, (2) AI in legal practice, and (3) AI as viewed from the bench, the ABA, and state bars. I participated as a speaker in the third session and will be sharing herein highlights of the entire event. All told, this important get-together was an invigorating and notable confluence of perspectives by law firms, solo lawyers, legal practitioners, legal scholars/academics, and others stridently interested in the future of lawyering as impacted by AI.
First, let’s lay out the notable panelists and their affiliations.
The panel on AI in legal education consisted of speakers Raina Haque, Wake Forest University School of Law, Rachelle Holmes Perkins, George Mason University Antonin Scalia Law School, Dan Schwarcz, University of Minnesota Law School, and was ably moderated by Keith Robinson, Wake Forest University School of Law. This was a stirring exploration of whether or not to use generative AI in law schools and got the event off to a fast start. In a moment, I’ll be examining the keystones revealed, doing so in my own words, and will be using the points made as a launching pad to closely unpack a heated and altogether contentious and extremely timely topic.
The panel on AI in legal practice included speakers Rob Hill, Holland & Knight, Michelle A. Reed, Akin Gump, and was skillfully moderated by Meghan J. Ryan, SMU Dedman School of Law. Topics covered a wide gamut, including how law firms are adopting generative AI, along with the fascinating and newly encountered twists and turns associated with having clients that either want their legal advisors to be using generative AI or that take the opposite stance. For space purposes, I won’t be able to cover the particulars of this emerging and eyebrow-raising set of considerations in today’s column, and aim to do so in a subsequent column.
The final session explored a deep variety of AI lawyering topics and consisted of Dr. Lance Eliot, Techbrium, Inc. (that’s me), Hon. Xavier Rodriguez, U.S. District Court for the Western District of Texas, Stephen Wu, Silicon Valley Law Group, and was adeptly moderated by Nathan Cortez, SMU Dedman School of Law. A slew of significant points was addressed. Again, for space purposes in today’s column, I’ll be covering the remarks in a subsequent column (please be on the lookout for that upcoming coverage).
Before I leap into my analysis and discussion, you might find of interest a helpful shortlist of some of my prior and ongoing series of pieces that provide a solid background on these matters, ranging from the application of AI to the law (such as the use of generative AI for performing legal services) and encompassing too the application of the law to AI (e.g., copyright issues of generative AI, AI governance and efforts to regulate AI, national and international endeavors, the U.S. Bill of Rights related to AI, etc.).
Here is a sampler list of seven of my essential pieces that adroitly convey these heady matters:
- (1) Generative AI Ramifications For Law Firms And Law Partners. Law firms and law partners need to address the rise of generative AI for performing legal tasks, and cannot avert their eyes or keep their heads in the sand on the steady march toward AI in the practice of law (see my in-depth analysis at the link here).
- (2) Big-Picture About AI & Law. This is my big-picture analysis of the forest for the trees regarding AI and the law, an altogether substantive piece identifying fifty crucial points that in-the-know lawyers and partners need to know about (see the link here).
- (3) ABA AI Resolutions. My close look at the ABA model rules and resolutions associated with AI, covering duties pertaining to how lawyers and law firms should be suitably approaching AI, including analysis of the ABA Model Rule 1.1 Comment 8, ABA Resolution 112, ABA Resolution 700, and the recently enacted ABA Resolution 604 (see the link here).
- (4) AI Hallucinations Undermining Lawyers. My detailed assessment and lessons learned arising from those two lawyers who got into hot water by relying upon generative AI when they cited AI-hallucinated legal cases in their formal court filings (see the link here).
- (5) Judges And Courts Reacting To GenAI Using Lawyers. My review and predictions about how judges and the courts are going to react to generative AI being used by lawyers and law firms (see the link here).
- (6) GenAI And The Attorney-Client Privilege. My early-bird concerns that unaware law firms and hapless lawyers may at times inadvertently waive their vaunted attorney-client privilege by using generative AI in ill-advised ways (see the link here).
- (7) Prompt Engineering Techniques For Lawyers. One of my many pieces covering legal examples highlighting prompt engineering techniques when smartly using generative AI by or for lawyers (see the link here).
- And many others.
I’ve been trying to ensure that lawyers and the legal profession are kept up to date on AI.
Change is afoot for the legal world. AI is a disruptive and transformative force. You’d have to be living in a cave that has absolutely no internet connection to believe otherwise. Lawyers across all areas of the law ranging from newbie attorneys to seasoned ones are going to be impacted by AI.
I dare suggest that this is an inarguable fact, though I hesitate to ever mention that something is inarguable when tossing around verbiage with lawyers (I brazenly try doing so during my workshops, lectures, and talks). In any case, I squarely stand by the notion that substantial AI impacts on the legal profession are inarguable, whilst agreeing that the mainstay of debate centers on which ways and in what timeframe things will play out.
Generative AI As Used Within Legal Education
I’ve got a seemingly straightforward question for you.
It is one of those questions that on the surface might appear to be simple and ergo would presumably garner a simple answer. Lawyers though know that at times simple questions can be entirely chock-full of densely packed and extremely complicated tradeoffs. This is one of those instances.
Put on your seatbelt for this thought-provoking question:
- Should law school students be allowed to use generative AI?
There, I said it, and you might immediately have a reactive response. Some law schools have recoiled at the use of generative AI and claim it is going to undercut those budding legal beagles. Other law schools have said that they welcome generative AI. Many law schools are scratching their heads and toying with generative AI in their educational efforts.
Generally, the response overall has been quite a mixed bag.
I tend to boil down the pursuits into five major positions or approaches:
- (a) Ban generative AI usage by law students. Outrightly ban generative AI use by law students at a given law school.
- (b) Permit generative AI usage by law students. Overtly allow generative AI use by law students though keeping a tight leash on usage.
- (c) Encourage generative AI usage by law students. Actively encourage generative AI use by law students and urge them on.
- (d) Nebulousness on generative AI usage by law students. Muddle along and say nothing either way about whether your law students can use generative AI.
- (e) Other assorted GenAI usage posturing. This is a catchall for either a mixture of the above or some other variant policy and approach to generative AI for law students.
Before I dig into those various approaches, I’d like to emphasize that the usual attention is devoted to whether law students are going to be using generative AI. This says nothing about whether law faculty might or might not be using generative AI.
Few give any thought about that added possibility. If, for example, a law faculty member decides to create an exam that is entirely composed via the use of generative AI, do you see anything wrong or amiss about this action? Some would insist that this is entirely up to the faculty member to decide on. Others would decry this approach and would worry that the exam is going to be less suitable without the human touch of a law faculty member devising the exam themselves.
Give the matter some contemplative thought. Does a law faculty member who voluntarily opts to use generative AI send any kind of subliminal message to the law students, such that doing so might be in alignment with or utterly contrary to some overall GenAI policy by the law school regarding law student usage of generative AI? Here’s a wilder one. Should law schools go out of their way to entice or reward law faculty for judiciously using generative AI, thus fostering them to learn about and get involved with generative AI?
Those are all construed as bitter fighting words in the hallways of academia, which I am not going to get further into in this particular discussion (if reader interest dictates, I’ll gladly elaborate in a future column posting).
Let’s get back to the central matter at hand, namely today’s law students and their use or non-use of generative AI while in law school.
As you’ll see shortly, the matter has significant implications not just for what happens in law school but also for what happens after law school, including the future of budding legal careers and perhaps the future of the legal profession in total. Everyone should care about this. Not just academics. Not just law students. The whole legal community encompassing law firms, law partners, existing lawyers, prospective clients, existing clients, and society overall has a notable stake in this weighty topic.
Let’s start the unpacking with a sharp bang. I’d venture that the most vocal and vehement position is that law school students should be categorically and unequivocally banned from using generative AI.
Period, end of story.
You might be notably curious as to why this is a highly vocalized exhortation. The reasons are plentiful.
One stated reason is that law school students need to learn how to think like a lawyer and that the use of generative AI will undermine that essential goal. In a sense, generative AI is seen as a potential crutch. Law school students will be underdeveloped in legal reasoning because they rely upon GenAI to do that reasoning for them. All they are doing by using generative AI is sorrowfully and dangerously undercutting their own legal education. In turn, this will seemingly undercut their legal careers. They will be forever one step behind, due to letting AI do their legal brainiac work for them.
Worse still, the worry is that once law students graduate and go into the legal profession, they will be hooked on generative AI. It is like a habit-forming drug. When asked to put together a legal brief, the law school graduate who was using GenAI in school will be completely at the whim of generative AI. They won’t be able to write a legal brief on their own. If somehow the GenAI is unavailable, this law graduate is out of luck. Bad for the law practice that hired the law graduate.
This is also seemingly bad news for the legal profession overall, namely that we will be quietly releasing a barrage of law graduates who can’t practice the law without their aided crutch of GenAI. All that the law graduate is good for is entering prompts into generative AI. Imagine that the GenAI generates errors or produces an AI hallucination. The AI-addicted law graduate won’t know that the generative AI is pulling the wool over their eyes. Clients are going to be upset, rightfully so. Legal malpractice lawsuits are going to go into high gear.
The bottom line, some lamentably insist, is that generative AI as used by law students is going to drive the legal profession mercilessly in a pitiful race to the hopeless bottom of an empty abyss.
Wow, that does seem depressingly worrisome and something we need to curtail.
Hold on. Don’t toss in the towel just yet. Lawyers know that hearing one side of a contentious issue is rarely the full story. You can be lulled and insidiously leaned in one direction by powerful assertions that are tilted in one direction. We ought to keep our minds open and hear what the other side has to say.
Consider closely the retort or counterarguments to the above-proclaimed total ban.
Gear up for this.
First, generative AI is going to become an integral part of modern-day legal practice soon enough. Law firms are going to have to embrace generative AI, whether they like it or not. Clients are going to want to know that their chosen legal advisors are using the latest in AI to stake out the best and brightest of legal positions. AI is not going to fade away. It will get stronger and deeper into the legal field.
Law school students who aren’t versed in generative AI are going to find themselves out of step with what law firms are going to look for. Existing lawyers within law firms are bound to be slower to adopt GenAI. It is the nature of things. Meanwhile, the hope will be that those fresh faces out of law school will know when and how to prudently utilize generative AI.
A law school that bans generative AI is going to undermine its law students. They are denying law students a chance to stand out as they come out of law school. Indeed, this lack of experience with generative AI is likely to dampen their career pursuits. They will forever be that last generation of law school students who didn’t make use of generative AI during their legal education.
On top of those career considerations, another aspect is whether declaring a ban on GenAI during law school is even feasible. Here’s what that means. A law school makes a hefty proclamation that law students cannot touch generative AI. GenAI is verboten. The administrators believe they can wipe their hands clean and be done with the matter.
Who will police this ban?
You might be tempted to say that all the law school needs to do is enlist one of those alleged GenAI detectors and use such a tool to scan any written work turned in by the law students.
Importantly, I’ve covered extensively in my column that those detection tools are not reliable and should not be used, see the link here and the link here. When it comes to text (versus detection of generated images), the detectors can be easily fooled. Do not fall for the unbridled and unfounded assertions that they work effectively. They don’t. Law school students will wise up and learn how to readily defeat the automated detectors.
There is an additional twist about the detection tools that really gets my goat. I’ve had many students come to me and indicate that they were falsely accused by a detection tool of having used generative AI to write a paper. There is in fact a disconcerting false positive rate with these tools. In other words, a sincere and non-cheating student can have their whole scholastic career tainted unfairly. You become guilty until proven innocent. Furthermore, those who use these detection tools tend to be doggedly insistent that the tool can do no wrong. They will stand on a lofty hill and proclaim a student as having cheated, despite the readily known fact that these tools can be wrong.
I’ll add more twists.
Suppose that some of the law school students obey the ban and stridently avoid using generative AI. They are trying to do as the law school has asked them to do. Good for them, one would assume. But, meanwhile, let’s imagine that some other law school students in their same classes decide secretly to “cheat” and use generative AI at that law school (despite the ban). They get away with it.
The students who were non-cheaters will realize they are essentially likely to get lower grades or at least be more taxed to do their work, yet nobody will care. The cheaters will prevail. The temptation to cheat goes up. There are already so many pressures on law school students that having to contend with whether to give in and be like the so-called cheaters is not worthy of the angst. One would bet that ultimately most students would be drawn to using the generative AI.
This in turn creates an entire underground activity at a law school. All of these law school students are expending time and energy to go under the radar of the ban. Is the ban worth that effort? One supposes that you could claim that any law student worth their salt will readily abide by the ban and reject those who are presumably cheating. Maybe they ought to turn in the law students who are using generative AI. Yes, that’s right, the matter of using generative AI becomes a snitch fest. Go to law school and learn to snitch on your fellow students. Not a promising way to start your days becoming a lawyer.
You might be under the impression that those throes and difficulties are worth it if the alternative is that by allowing generative AI we are going to produce a generation of legal zombies that are wholly reliant on AI. It seems that nearly any kind of problematic issue during law school would be worth coping with, assuming it can stop the flow of law-deficient thinkers being pumped out into the legal profession at large.
Stop there. We need to carefully examine the logic of this proclaimed linchpin. It is sitting on extremely loose and flimsy ground.
Research studies already have provided the first indications that law students who prudently use generative AI are typically aided by the use of such a tool. Low performers appear to be especially boosted into higher levels of performance. Though such studies are still being performed, the belief by some is that this AI-boosting action is not merely a one-time fleeting affair. There is a strong possibility that the law student garners improvements in their knowledge of the law and is gaining by suitably using generative AI.
The blanket and loosey-goosey claim that law students will undercut their legal education by using generative AI is not backed by any solid empirical evidence (it is a supposition, mainly based on questionable assumptions or narrowly targeted tangential studies). It is hand waving and heretofore unsubstantiated conjecture.
As an aside, there was similar speculation historically that the advent of word processing would undercut law school students and that they would falter in becoming legal thinkers. The same was said of the adoption of legal research tools for doing law school homework. A storied history of similar qualms about automated tools for budding lawyers has permeated the legal education field for a long time.
A caveat comes in here. I hope that you carefully noticed that I said that the judicious or sensible use of generative AI is the key here (the same advice goes with word processing, legal research tools, and the like). I would generally acknowledge that if a law student uses generative AI in an unfettered and wanton fashion, the results for that law student might not be especially beneficial. They could indeed form a crutch. They could get themselves into a pickle about learning the law. All manner of disagreeable results could arise.
Given those concerns, I don’t think we ought to toss the baby out with the bathwater (an old adage, perhaps nearing retirement). Just because there might be some law students who go overboard does not mean that we should reject across-the-board the use of generative AI in a law school. That is a crazy and unfounded overreaction.
Going back to the underground use of generative AI, the odds are that any law school that tries a total ban is going to have to contend with the hidden violators. Even if a law school openly allows generative AI, this does not mean that the usage necessarily applies to in-class tests, nor would it presumably be viably useful during in-class discussions.
In short, the law school faculty will have to step up to the plate and devise suitable and sufficient means to try and gauge the true and forthright work of their law students. They will have to do more in-class activities or at least engage in means to witness firsthand the capabilities of their law students. The world is heading that way for all students in all majors across all professions. All faculty are in the same boat when it comes to assessing students in light of the emergence of generative AI.
A grand wake-up call is being sounded for schools from kindergarten to graduate school.
That’s a stark and unyielding fact.
More Food For Thought On AI In Legal Education
I trust that you can discern that an outright ban on the use of generative AI in a law school is fraught with challenges and would seem regrettably lacking regarding the forward-looking career growth of the students. This also would seem shortsighted when it comes to the advent of generative AI in the legal profession. Law students ought to be torchbearers and early adopters who upon graduation (or during internships or clerkships) can bring their capacities to aid law firms as they come under industry pressures to use generative AI.
Different settings will undoubtedly require that law graduates adjust accordingly. A judge seeking junior-level legal assistance might declare that no generative AI is to be used. Okay, so a savvy budding lawyer who knows generative AI would simply set aside that skill for that particular circumstance. Perhaps, though one never knows, once the judge gets to know the assistant, discussions about opening the door toward using generative AI might widen the views of the judge.
On the other hand, there is the outside chance that a judge might be purposely looking for emerging legal wranglers who do know and have tamed the Wild West of generative AI. The law student who was able to make use of generative AI now has an opportunity that other law students might not have a chance at landing. The gist is that a generative AI-savvy law student can be ambidextrous and apply their skillset when the situation presents itself.
A ban on generative AI by a law school closes doors that might otherwise be opened.
Some law schools opted to do a full ban as a means of buying them time to figure out what they should do on the thorny matter. In that case, hopefully, the ban is short-lived, unless the law school gets bogged down bureaucratically in trying to sort things out. Let’s next consider something other than a total ban. We shall assume that either there is no ban to be imposed, or that some form of alternative combination or permutation is prudently worthy of attention.
One consideration is the let-it-all-go approach, sometimes vaunted as a strategy.
Allowing a free-for-all about the use of generative AI by law school students would seem to tilt things toward an end of the spectrum that likewise has problems. You are bound to have some that go hog-wild and use GenAI too much. You’ll get some that avoid GenAI due to the unknown or because they fall into a clique in their class that has decided to despise generative AI. A mess is going to ensue.
A better direction would be to provide guidance on how and when to use generative AI, as clearly and openly (formally) stated by the law school. In addition, having law school policies that stipulate self-reporting on the use of generative AI would be handy too. The gist is that the use of generative AI comes out of the shadows. You might still have some cheaters here or there, but the bulk of the law students will likely be reasonably using generative AI (tending to adhere to reasonable conditions as per the policies) and not veer into the untoward forbidden territory.
This brings up various logistics considerations.
I will pose them as mindful questions.
Should a law school that allows for generative AI to be used, within stated limits, ensure that the law school students have ready access to GenAI?
This is due to the potential costs of making use of a generative AI app. Some GenAI apps are free to use, though these are typically less capable. The ones that require a subscription fee are usually better. If a law school doesn’t take this into account, there is a chance that you’ll end up with two sets or segments of law school students, namely those who can afford the better GenAI and those who cannot and must rely upon a lesser GenAI.
Should a law school provide training or other educational opportunities on how to use generative AI?
This is brought up due to the free-for-all notion of just handing out the keys to the new car and telling students to go for it. Right now, it seems unlikely that most law students have already become versed in using generative AI. They probably have toyed with GenAI, perhaps on a personal basis. They are unlikely to have deeply used it and are lacking in prompt engineering skills. In the future, you can expect that generative AI will be used during undergraduate degree programs, therefore law school students will seemingly already come with some skills in GenAI usage.
If you give a law school student a prescribed login to a generative AI app (as chosen by the law school) and do nothing else, the chances are that the student will use the GenAI briefly and then falsely presume that the AI app is of little benefit. That’s a sad result. A law school might proudly do a checkmark that they made generative AI available to their law students. Whether the law students got anything useful out of the access is a different matter.
At this time, given the overall lack of understanding about how to best use generative AI, especially in a legal context for doing legal work, a law school would be wise to accompany the generative AI with some form of explicit training or tuned educational coursework. Make the tool available. Show them how to use it. Glean feedback and keep the law students engaged and updated on usage. Do this for the entirety of their law school progression.
Don’t do a classic one-and-done. You will waste the effort and, inevitably, down the road, you will feel defeated. The thing is, you’ll need to face the music and own up to having shot yourself in the foot.
There is another element to this adoption of generative AI in law schools for law students that needs to be addressed. It is a crucial element. A mighty vital ingredient.
Law school faculty.
Will the law school faculty sit back and ignore or overlook the generative AI being used by students, or will they actively seek to incorporate the law students’ usage of generative AI into the esteemed legal educational experience?
Here’s the deal.
One approach consists of idly allowing students to use generative AI. The law school classes remain unchanged. It is up to the students to determine whether or not they want to use generative AI for any particular class they are taking, such as a class on contracts, torts, civil procedure, etc. Whatever the student does with generative AI is of no concern to the faculty, other than trying to ensure that the students abide by the law school policies and are not avidly using generative AI to do their work for them.
Another approach involves the faculty opting to acknowledge and incorporate the use of generative AI into law school classes. Assignments might directly advocate using generative AI for particular purposes. Class discussions might include examining what GenAI indicated on a legal matter being discussed. And so on.
This latter approach is going to be a bit of an uphill battle. First, this would require law school faculty to update their courses and go out of their way to encompass generative AI. Why should they do so? Does it aid in attaining tenure? Does it come with added pay? If there are little or no incentives, the chances are that the added work is not going to be seen as worthwhile. They already have enough to do and piling on more work without direct benefits is unlikely to be readily accepted widely.
Another concern at times expressed is that you are taking away precious and costly in-class time to delve into “tech” that otherwise distracts from learning about the law. This is regrettably true if the generative AI is poorly intertwined with the classwork. No doubt about it. You see, an astute and deft hand is required to achieve a balance and ensure that any generative AI aspects considered during class time are driving toward learning about the law. Fumbling with generative AI during class time is a mistake and sits on the shoulders of those who haven’t figured out seamless ways to incorporate such tools.
Some would suggest that only certain classes that are designated as AI-accompanied law school classes would seek to embrace the overt use of generative AI. There are already classes often provided on an elective basis that deal with emerging technologies associated with the law. The odds are that this will continue for now. Until generative AI gets more fully adopted by law firms, the chance of it being directly infused into everyday law school classes is a remote bet (that day will eventually arise).
Going back to the idea of a ban, one argument is that maybe a ban should be imposed at the start of law school. This might last for a semester or quarter or could be for the entire first year for 1Ls. This might then aid in ensuring that the law school student gets a leg up on learning and thinking about the law. They are not simultaneously contending with using generative AI. Once they reach the next level of being a 2L or 3L, at that juncture they are allowed to use generative AI.
It is a clever way to split the difference or reach a compromise, one must say.
This approach has tradeoffs, as you might expect.
One concern is that the law students will go underground during their 1L and seek to use generative AI, despite being told it is banned for them. They are going to be interacting with 2L and 3L students, perhaps witnessing the advantages of using generative AI. A tough proposition will arise for them. Should they abide strictly by the first-year ban, or should they opt to cheat the ban? Ugliness once again ensues.
Another concern is that an inadvertent adverse consequence arises. Perhaps the 1Ls become convinced that generative AI is not of use to them (based solely on the first-year ban and never having used it). They then suddenly are given access upon becoming a 2L. They have not previously incorporated generative AI into their law school learning. Out of sight leads to out of mind.
In a sense, it could be that leveraging generative AI is now outside of their mindset. They rejected the generative AI usage out of a proclaimed fear of forming bad habits during 1L and the repeated dogma during that time of being banned from doing so. Perhaps this leaves them shortsighted, inadvertently.
Not wanting to leave that dour impression lingering, a hopeful note is that it seems hard to believe that most, or at least many, law students wouldn’t be eager to grasp hold of generative AI once they have been given permission to do so. Hope springs eternal.
I mentioned at the start of this discussion that sometimes a simple question garners a big answer.
You’ve now seen this with your own eyes.
I might also mention that if you are wondering in what ways law school students might beneficially use generative AI, I’ve covered this in prior columns and as cited in the links near the top of this piece. It isn’t only about writing legal papers.
Generative AI can be used in a manner known as a flipped interaction. This consists of the generative AI asking the student questions about the law. Doing so can be handy for preparing for tests, including those graduating students who are aiming to take the bar exam.
Generative AI can be used to examine the work of a law school student and provide a critique or analysis of their budding efforts to write in legal ways. Imagine a law school student who has composed a practice legal brief and wants to gauge how good it is. How could they do so? Asking another law school student, assuming they have peers with the time to do so, can be impractical and awkward. One easy 24×7 means consists of entering the draft into generative AI and getting a review from the AI (noting and being wary of any privacy or confidentiality issues, see my coverage at the link here).
Generative AI can be used to improve human skills in legal argumentation. A law school student might find handy a real-time effort to exercise the fine art of legal argument-making. This is good practice for the real world. Using generative AI, the student can enter a legal setting and ask the generative AI to pretend it is on the other side of the arguments to be made. The student then does a back-and-forth fencing match with the generative AI. A prime means of exercising our valued adversarial judicial structure. The student learns to react on their toes.
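To make the flipped interaction idea above a bit more concrete, here is a minimal sketch of how such an instruction might be framed for a chat-style generative AI app. This is purely my illustration rather than any prescribed or official method; the function name, wording, and parameters are hypothetical placeholders, and the resulting text would simply be pasted in (or sent via an API) as the opening prompt.

```python
# A hedged, illustrative sketch of framing a "flipped interaction" prompt,
# in which the generative AI quizzes the law student instead of answering.
# All names and wording here are hypothetical examples, not a standard.

def build_flipped_prompt(topic: str, num_questions: int = 5) -> str:
    """Compose instructions that flip the usual interaction: the AI asks
    exam-style questions on a legal topic and critiques each answer."""
    return (
        f"You are serving as a law school tutor. Do not lecture me or "
        f"answer on my behalf. Instead, ask me {num_questions} exam-style "
        f"questions, one at a time, about {topic}. After each of my "
        f"answers, briefly critique my legal reasoning, note any missed "
        f"elements, and then pose the next question."
    )

# Example usage: a student prepping on negligence could start a session
# by supplying this text as the first message to their sanctioned GenAI app.
prompt = build_flipped_prompt("the elements of negligence in tort law")
print(prompt)
```

The same template idea extends naturally to the argumentation practice described above, by instead instructing the AI to take the opposing side of a stated legal position and push back on each of the student’s arguments.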
Note that I am emphasizing that generative AI can be used as a substantive aid for law students. Let’s be real. Not every student is going to come up smelling like roses. A barrel nearly always has a few rotten apples.
Law students who decide to hand over the writing of a class-required legal brief and let generative AI do the entire thing, well, that’s not what they ought to be doing. You can assume that some law school students will take that route. It is for those reasons that a formalized policy about the use of generative AI is needed by a law school. In addition, the faculty must be on the ball to ferret out deviations from the policies, doing so with care and not getting trapped in false or misleading means to do so.
A final few remarks for now on this heady matter.
I’ve heard various legal pundits spout that it is the worst of times when it comes to the emergence of generative AI usage in law schools. They gloomily fear that we are going to produce law graduates who do not know a whit of law. That’s the assuredly sad-faced look at the world.
I lean instead toward the happy-face side of things.
Allow me to elaborate.
Generative AI is amply going to shake up the legal realm. Law school students ought to know what is coming. It is a backward viewpoint to fully ban generative AI, and a ban won’t do much good anyway. Make sure your law school students are versed in the future. They are likely to be the ones who, just a few years into their legal careers, will be greatly impacted by generative AI. If the law school they came from had its head in the sand, those graduates would be caught unduly off-guard.
Law schools and the future of lawyers matter a heck of a lot to our society. I’ve tried to provide a semblance of the arguments that concern the question of whether law students should use generative AI or not. The above commentary and sentiment are mine alone and do not necessarily reflect the views of anyone else. I’ve tried to judiciously combine my thoughts with the numerous discussions and interactions that I have had on these topics.
As a final teaser, go ahead and get yourself ready with another handy bucket of popcorn to read my upcoming analysis and insights on some of the other major topics covered at the recent symposium. I promise you a fun and fact-filled look at additional and vital considerations for the future of lawyering and the AI lawyering transformations coming this way.
The best of times is coming for those who have their eyes open.