18 Comments
Laura Creighton

I have found another problem. A great many LLMs write as if we are all part of a shared legal tradition based on English common law. But people working out of a different legal tradition have different legal norms and precedents: to give an example, what counts as 'negligence' here in Sweden is a very different matter from what it is in the U.K. But this may just mean I have an easier time detecting foreign legal imports than somebody in Australia who ends up inadvertently building a case on American judicial norms. It's not just a matter of getting the legal references from the correct corpus, because the problem is often a matter of whether something is considered reasonable or not. This varies, not only over time but between nations. People who do not have a deep understanding of their own legal traditions write briefs that are often silly, silly to the point where, at least in Sweden, we would call them negligent.

But 'get an LLM to write some briefs and hand them to students with the instruction "shred this"' can be great fun and good for learning. Naturally over-agreeable people have an easier time overcoming that tendency in themselves when it isn't a fellow student who is being humiliated.

[insert here] delenda est

There is a very good broader point there, which is that current LLMs are very American (the Chinese ones are sophisticated copies of American ones). What no one is doing, to my knowledge, is specifically curating the datasets required to train European LLMs.

This was a side-hobby idea of mine for civil law purposes, until I focused on it and realised that I was short many millions to pay for the processing power required for the training runs (and also to curate the data!) - no, I am not always the sharpest tool in the shed.

:)

Kate Graves

Agree with all of this, especially on essay writing as an essential part of learning.

Re degree grades: what I tell my students is that a good grade is nothing more than a foot in the door to a job interview; if they want to actually impress the interviewer and get the pupillage/training contract, they will need to show that they earned the grade and can speak intelligently about the law. So even in the best-case scenario, where the LLM produces an essay better than what they could come up with themselves, all that will mean is that they increase the number of interviews in which they make a tit of themselves with poor legal reasoning, knowing nothing about the subject they supposedly got a first in. I don't know how many, if any, pay attention to me - based on last year's marking, I'd estimate that around half of the essays were written with AI.

Noam D. Plume

A thoughtful and insightful article, Katy. As a technology lawyer who regularly uses AI, however, I am not sure that I agree entirely.

1. We accept that, notwithstanding their duty to the Court, lawyers make mistakes - human error: errors in reading, interpreting, analysing and referencing, typos, games of broken telephone and so forth. The idea that there is zero room for error in the context of litigation is simply wrong. I appreciate that AI must hallucinate by design (and this risk increases the more prompts there are in a conversation), and that this introduces a new vector through which errors may be made; however, it trades those particular errors off against a reduction in other human errors. There are also techniques to minimise AI hallucinations and get it to self-correct (experienced AI use, good guardrails and a 'human in the loop' to review the output). I presume this is why the Court has taken the view that it has. But none of this negates the necessity of learning the AI skillset and applying it.

2. It is true that when it comes to legal essay writing at university, students are giving up their learning for expedience - creating an essay structure and getting ideas via an LLM. As C.G. Jung said: "Beware of unearned wisdom." There is no doubt that this is a cost. However, they also gain a skill - using an LLM. It is not a straightforward skillset: there are many techniques to master, and it is constantly evolving. Additionally, if they fail to appropriately review their essays themselves, they are taking a large risk, as they understand neither the limitations of AI nor the limitations of their own skill in using it. The practice of law is changing drastically in the corporate sphere, and those who fail to find an appropriate balance between using new legal tools like AI and retaining core legal knowledge and skills will simply get left behind.

3. Practically speaking, AI can often encapsulate the law very well these days. It can argue from a particular perspective or provide a risk-based objective assessment. It is certainly capable of reasoning by analogy (and is actually sometimes better than humans at this, due to how an LLM works!) but, as with anything, it needs to be sense-checked by a human. The main limits to AI, in my view, are: the human who uses it and their ability to prompt effectively; the data the AI model has available to it (a public AI like ChatGPT is a very different beast to a legal AI like Harvey); the token limitations of the model in question, which dictate how large a document it can read and output in one go (a rough sketch of working within those limits follows after point 4); and the guardrails (including human review of the output) implemented.

4. The meaning of what a good lawyer is, is changing. A colleague showed me a quote from a top-tier law firm to analyse a regulation, prepare a template checklist for contract review to ensure contractual compliance, review around 100 contracts and populate the checklist, and then add clauses as required. The quote was around $400k, and we were told it would take a few months of work done by a large team of top-tier lawyers. I did this exercise for them in under an hour using AI. We validated the work and it came back perfect. This is a law firm that goes around town spruiking its innovation credentials, its legal expertise and its commercial acumen. They're not going to be briefed very much anymore: either they know they can use AI (and they have access to a top-of-the-line legal AI) and aren't using it, so that they can gouge clients for exorbitant fees, or they ought to know to use it and are negligent for failing to keep up with the single biggest development in the industry in a very long time.
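
On the token limitations mentioned in point 3, here is a minimal sketch of the standard workaround: split a long document into chunks that each fit within the model's context window, review each chunk, then collate the results. The 4-characters-per-token ratio and the budget figure are crude illustrative assumptions, not any particular model's numbers:

```python
# Rough sketch: pack the paragraphs of a long contract into chunks that each
# fit an assumed token budget, so each chunk can be reviewed in one pass.

CHARS_PER_TOKEN = 4   # crude average for English text (an assumption)
TOKEN_BUDGET = 3000   # illustrative; leaves headroom for prompt and reply

def chunk_document(text: str, budget: int = TOKEN_BUDGET) -> list[str]:
    """Greedily pack paragraphs into chunks under the token budget."""
    max_chars = budget * CHARS_PER_TOKEN
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # Start a new chunk if adding this paragraph would blow the budget.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = ""
        current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

A real pipeline would use the model's own tokeniser rather than a character estimate, but the shape of the loop is the same.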

At the moment, there are incentives within the legal industry to cast legal work as special and impervious to AI, and to favour manual review. The legal industry (like any other licensed profession) is jealously monopolistic by nature. Its business model is often predicated on information asymmetry between client and advisor, and it is a daily occurrence that a lawyer advises their client to do something the hard way when it can be done much quicker and more easily. Why? Fees. The billable hour. Greedy partners. Accordingly, inefficiency is a feature of the system, not a bug. It has been tolerated because the pace of technological change has not shown huge productivity benefits and cost savings clearly and all at once, but that is changing with AI. Now, legal analysis and decision-making is treated basically the same as any other form of corporate analysis and decision-making (as it should be - it is no more complex or special). Processes that are ancient and accepted because 'that's what we've always done' are now being challenged, and the savings in corporate are enormous (within the ASX100, easily in the tens of millions per organisation) with little to no downside risk.

There are two important quotes at the end of your essay that I also think need to be addressed:

"If lawyers outsource their thinking, writing and research to something else, they render their presence nugatory. "

I am not convinced this is the case. Firstly, the outsourcing should not be 100% (though with effective use it could perhaps be 90%). But in my experience, that is not where the value of lawyers lies in any event. It lies in asking the right questions, and that is effectively what prompt engineering is.

"If you’re going to use an LLM, it’s incumbent upon you not to accept it trustingly, but to double check it. In which case… well, might it not be easier to do the work yourself?"

I agree with the first bit, but not the second. Yes - we cannot totally resile from all responsibility when AI does our work for us. However, no - it's not easier to do everything yourself; not even close. Actively constructing an essay structure, checking spelling, grammar, clarity and succinctness, and sense-checking the strength of an argument or the correctness of a statement of legal principle and its application to facts takes a human hours; AI can do it in seconds. I have personally never handed in an assignment that I haven't given a final read-through, so there is no incremental effort in doing a final read-through of AI output. The incremental effort that AI takes away from this process is immense, and worthwhile.

Fundamentally, I think it is high time that lawyers were trained to be efficient. I appreciate this creates problems from a grading perspective; however, even then, some students use AI more effectively than others. Some students don't proofread, or proofread poorly, and so on. Differentiation will always exist, but the manner or focus of assessment may need to change with the times.

Katy Barnett

Very interesting points. I still don't think, in my area, that I'd trust LLMs for anything. It may depend on what area you are in.

Now, where AI might be useful is in doing the really stupid administrative tasks the university constantly sets. That would make me much more efficient. But that’s another story… maybe a different rant.

[insert here] delenda est

I came here (to the comments! I came to the article because I like our host’s writing) to point out that AI is already well and truly embedded in discovery, a very significant part of which is not (or certainly not immediately) linked to litigation but to regulatory investigations/queries.

But Noam makes a good point which I think our host has missed: new technology often makes an impact by doing what we can already do better, faster, or both, but it often has an even greater impact in relation to what we cannot practically or physically do now. For example, a tractor tills a field perhaps 100 times faster than a human, but the real impact is that it makes large-scale farming possible, which creates an agricultural surplus and allows the vast majority of the population to cluster in vastly more productive urban areas, etc.

As Noam's contract review example highlights, so it will be with AI. Some of this will indeed be plain-vanilla efficiency enhancement, with quite considerable social value. But much of it will be very different! If I were Katy, I would ask my students to consider the impact on our legal (and social) system of everyone having access to lawsuits at essentially court filing costs plus a small premium.

Obligatory preemptive disclaimer: AI may well be different if scaling continues to hold, and may well take all the jobs and all the resources, in which case the point is rather moot because we are dead. I'm quite cynical about this happening in the next ten years, but I see it as certainly possible within 5 years. My comments above assume this awkward point away.

Katy Barnett

Yes, excellent and helpful comments, both. I love being challenged as to my views.

Francis Turner

I'm going to add a link here for the legal-discussion part of this recent post of mine: https://ombreolivier.substack.com/p/the-ai-tangle?r=7yrqz

AI for doing (electronic) discovery? Great!

AI for writing a motion or similar legal document? Only if carefully reviewed.

We know lawyers will keep using AI. It's far too useful not to. But successful users will always treat it as a research assistant and perhaps the creator of a first draft, and nothing more.

Katy Barnett

Yes. Basically I would treat it like a very inexperienced law clerk prone to error.

[insert here] delenda est

That's a good approach - but do you really? I assume that if you actually had an error-prone junior clerk, you would train it on analysis of the legal texts relevant to the area you want it to practise in...

Katy Barnett

Yes. I don't really know how to train AI. I can deal with people.

[insert here] delenda est

You very likely have colleagues who can help you with that, but this junior clerk, whilst inexperienced and error-prone, is also somewhat autistic and, to mangle a phrase of Keating's, swallowed the whole f*ç%ing internet when he was young: you can just ask it :)

Jennie Pakula Lawyer's Friend

Thanks Katy, as always very clear and thoughtful. I had to laugh when I read your account of those who think of law as just a bunch of processes and algorithms. The sheer vagueness of the process of learning law makes you realise that law is full of right-hemisphere thinking that you just can't capture in a flowchart.

Katy Barnett

It's all fuzzy. I have to say to my students: there is no single right answer; there is a range of right answers to some questions, and it all depends on how you argue and justify it. This makes my mathematician mother feel ill, but there you go...

[insert here] delenda est

Your mother is probably well placed to demonstrate to you how LLMs do not have anything resembling a flowchart in them (sometimes they have something a bit similar in their interfaces, but that is another thing). Their mathematics is quite "fuzzy" in one sense: every concept (token) they have ever encountered is related to every other one, but each one differently, and the relationship varies according to the concept(s) that came just before.

Not that different to our right (or left!) hemispheres ;)
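
For the mathematically inclined, here is a toy sketch of the "attention" step behind that context-dependent relating of tokens - a generic illustration, not any particular model's implementation. The matrices Q, K and V stand in for learned projections of the token embeddings:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each token's output is a weighted
    blend of every token's value, with weights that depend on context."""
    d = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # pairwise relatedness of tokens
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: fuzzy, not a flowchart
    return weights @ V                              # context-dependent mix

# Three tokens with four-dimensional embeddings; in a real model Q, K and V
# come from learned linear maps, and many such layers are stacked.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(attention(x, x, x))
```

Toy numbers, obviously, but it shows why "fuzzy" is apt: every token ends up as a weighted blend of all the others.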

Neil Foster

Thanks Katy! I completely agree, and will share with my current students!
