AI vs Lawyers
October 26, 2018 12:19 PM   Subscribe

Hackernoon is reporting on a study that pitted an AI against a team of "top US corporate lawyers" in an NDA test: "In a landmark study, 20 top US corporate lawyers with decades of experience in corporate law and contract review were pitted against an AI. Their task was to spot issues in five Non-Disclosure Agreements (NDAs), which are a contractual basis for most business deals." Large differences in time and accuracy are claimed.

Benefits claimed:
"The study, carried out with leading legal academics and experts, saw the LawGeex AI achieve an average 94% accuracy rate, higher than the lawyers who achieved an average rate of 85%. It took the lawyers an average of 92 minutes to complete the NDA issue spotting, compared to 26 seconds for the LawGeex AI. The longest time taken by a lawyer to complete the test was 156 minutes, and the shortest time was 51 minutes. "
posted by aleph (38 comments total) 13 users marked this as a favorite
 
Eh. I would imagine that if the lawyers could be stripped of the training that is given to them in being slow and methodical so as to avoid malpractice claims, that number would drop substantially (though obvs not to 26 seconds). Algorithms don't care. Good luck suing an algorithm. (And good luck having your algorithm pick up on the newest forms of trickery, as opposed to the ones they've been trained against.)
posted by praemunire at 12:39 PM on October 26, 2018 [8 favorites]


The meat lawyers didn't even pick up the old forms of trickery never mind anything novel.
posted by zeoslap at 12:59 PM on October 26, 2018 [1 favorite]


i think the many articles about how AI is targeting law jobs next are way, way overblown. issue spotting, sure, and maybe some aspects of legal research, and perhaps even writing boilerplate contracts. but what about writing unique pleadings and motions that will be read by a human judge or heard by a human jury (which we aren't getting rid of ever, obviously, for social reasons), or oral arguments in court? what about talking to clients who are distressed? and this doesn't even touch on criminal law, where human lawyers are a matter of constitutional right, regardless of whether an AI could do the job better.

maybe tech will help fewer lawyers handle more cases, so the profession will shrink a bit. but i think harbingers of doom re the white collar AI economy are probably more on point when they focus on professions where the duties to clients are weak and mathematics and statistics are particularly helpful, like investing.
posted by wibari at 1:00 PM on October 26, 2018 [2 favorites]


Algorithms don't care. Good luck suing an algorithm.

The client doesn't sue the algorithm (or LawGeex). The client sues the lawyer that relied on LawGeex, if it can be shown that using it didn't meet the standard of care. My own opinion as an attorney, developer working in the legal AI space, and lecturer who has taught a course on legal technology is that we are only a few years away from it being malpractice not to use these kinds of tools.

And good luck having your algorithm pick up on the newest forms of trickery, as opposed to the ones they've been trained against.

The usual workflow for these kinds of contract review products has three parts (a rough sketch in code follows the list):

1. Identify any clauses that are known to be legally suspect or detrimental to your client's side of the agreement.
2. Identify any clauses that are not known to be standard. These would then be reviewed in more detail by the human attorney. This is the part that helps catch new trickery: not directly, but by directing the human's attention to something the AI doesn't recognize.
3. Identify any clauses that should be there in a standard document but aren't present.
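
To make that concrete, here's a toy Python sketch of that three-part pass. The clause lists are invented placeholders for what a real product would learn from a large corpus of reviewed contracts, not anything LawGeex actually does:

```python
# Toy sketch of the three-part review pass described above.
# KNOWN_BAD and STANDARD_CLAUSES are invented placeholders; a real
# product derives these from a trained model, not hard-coded sets.

KNOWN_BAD = {"perpetual confidentiality", "unilateral injunctive relief"}
STANDARD_CLAUSES = {
    "definition of confidential information",
    "term and termination",
    "return of materials",
}

def review(contract_clauses):
    """contract_clauses maps a clause label to its text."""
    flags = []
    for label in contract_clauses:
        if label in KNOWN_BAD:
            flags.append(("known-bad", label))      # part 1
        elif label not in STANDARD_CLAUSES:
            flags.append(("unrecognized", label))   # part 2: route to a human
    for label in STANDARD_CLAUSES - contract_clauses.keys():
        flags.append(("missing", label))            # part 3
    return flags
```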

These kinds of products have a couple of major use cases:

1. Allows at least some level of review of documents that normally don't get reviewed at all because it costs too much.
2. Allows humans to review literally every document in complex cases (e.g. mergers & acquisitions due diligence) where before they would only review a sample.
3. Allows humans to be more efficient and less error-prone because they don't get fatigued by looking at hundreds of pages of dense legal boilerplate. The AI directs them to the most important or unusual parts of the document.

AI-based tools like this are already standard in ediscovery, and they are rapidly becoming standard in other areas such as M&A due diligence (e.g. products like Luminance and Kira). That is one of the reasons behind a lot of recent consolidation among large law firms. A couple of years ago the large firms started seeing the writing on the wall and realized that pretty soon there simply wouldn't be enough of that particular cash cow to go around.
posted by jedicus at 1:01 PM on October 26, 2018 [41 favorites]


The nice thing about an AI lawyer is that you can kill it with the flick of a switch. Shakespeare would approve.
posted by CynicalKnight at 1:05 PM on October 26, 2018 [2 favorites]


Oddly enough, I was just reading a story about how AI vision is fooled by optical illusions, just like humans. (It is, in fact, a pretty neat insight into the similarities between human vision processing and AI vision processing!) So now I'm imagining contracts illustrated with optical illusions to distract the AI reviewers...
posted by RedOrGreen at 1:09 PM on October 26, 2018


Static analysis for legal documents is a no-brainer since well crafted laws (HAHAHAHAHAHA) are pretty much just logical statements (HAHAHAHAHAHA). But seriously, this kind of static analysis should be good for 90% of the contracts fed into it...
posted by mikelieman at 1:10 PM on October 26, 2018 [2 favorites]


Metafilter: The meat lawyers didn't even
posted by lalochezia at 1:11 PM on October 26, 2018 [5 favorites]


The nice thing about an AI lawyer is that you can kill it with the flick of a switch. Shakespeare would approve.

And there's the motion for an injunction against shutting off the power pending litigation on the issues: "Can I turn off my AI lawyer now that I've paid its fees?"
posted by mikelieman at 1:11 PM on October 26, 2018


What about writing unique pleadings and motions that will be read by a human judge or heard by a human jury

What if the AI can read your draft pleadings or motion and suggest cases that you haven't cited but probably should based on the content of your draft and the other cases you cited? That's what Casetext's CARA does.
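
A crude way to build that kind of suggestion (and I'm not claiming this is how CARA actually works) is to mine co-citations: extract the cases a draft cites, then recommend cases that frequently appear alongside them elsewhere. The index and regex below are invented for illustration:

```python
import re
from collections import Counter

# Hypothetical co-citation index: for each case, the cases that often
# appear alongside it in other briefs and opinions.
CO_CITATIONS = {
    "Ashcroft v. Iqbal": ["Bell Atl. Corp. v. Twombly"],
    "Bell Atl. Corp. v. Twombly": ["Ashcroft v. Iqbal", "Conley v. Gibson"],
}

# Deliberately naive pattern for "Party v. Party" style case names.
CITE_RE = re.compile(r"[A-Z][\w.' ]*? v\. [A-Z][\w.']+")

def suggest_cases(draft_text):
    """Suggest uncited cases that co-occur with the draft's citations."""
    cited = set(CITE_RE.findall(draft_text))
    suggestions = Counter()
    for case in cited:
        for neighbor in CO_CITATIONS.get(case, []):
            if neighbor not in cited:
                suggestions[neighbor] += 1
    return [case for case, _ in suggestions.most_common()]
```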

Much of legal AI is about augmenting and accelerating the human lawyer and helping them not make mistakes or overlook details. It also tends to cut costs, which will help make legal services more accessible to the non-wealthy. Sometimes it's aimed directly at cutting or even eliminating costs, such as what DoNotPay is doing with parking tickets, eviction, and other common legal issues.

A huge part of the push to adopt AI doesn't come from the firms themselves. They're generally very conservative and would be happy to continue to make a lot of money via their existing business model. The push is coming from the clients, who would prefer to pay less for more and better work.
posted by jedicus at 1:12 PM on October 26, 2018 [12 favorites]



"Now I am become Python NLTK, the destroyer of legal worlds”
posted by Damienmce at 1:15 PM on October 26, 2018 [7 favorites]


Also when the going gets pattern matched, the pattern matched get circumlocutory.
posted by Damienmce at 1:19 PM on October 26, 2018 [1 favorite]


ALL. God save your majesty!

JACK CADE. I thank you, good people:- there shall be no money; all shall eat and drink on my score; and I will apparel them all in one livery, that they may agree like brothers, and worship me their lord.

DICK. The first thing we do, let's replace all the lawyers with manufactured automata.

ALL. [crickets]
posted by CynicalKnight at 1:26 PM on October 26, 2018 [1 favorite]


"Now I am become Python NLTK, the destroyer of legal worlds”

In the beginning was the Word, and the Word was "WHEREAS".
posted by The Bellman at 1:29 PM on October 26, 2018 [10 favorites]


Once again, David Brin's Earth envisioned this, even if he might not have actually invented it:

But even though no geeps were watching now, dozens must have recorded both parties converging on this spot... chronicles they'd happily zap-fax to police investigating a brawl after the fact.

Not that fighting was strictly illegal. Some gangs with good lawyer programs had found loopholes and tricks. Ra Boys, in particular, were brutal with sarcasm... pushing a guy so hard he'd lose his temper, and accept a nighttime battle rendezvous or some suicidal dare...

posted by Apocryphon at 1:39 PM on October 26, 2018


Please note the article was written by a PR guy for the company that sells the product in question.

Also note that the “AI” may just mostly be very basic text processing, like searching for specific patterns or finding phrases that are not found in a large corpus of standard contracts. (I.e. automated Ctrl-F). Such simple techniques do shockingly well on many seemingly complex tasks, e.g. can beat a median human player in trivia games.
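
To illustrate how basic that can be, here's roughly what "automated Ctrl-F" against a corpus looks like; the corpus and cutoff here are obviously made up:

```python
import difflib

# Made-up reference corpus of "standard" contract sentences. A real
# system would have many thousands of these.
STANDARD_CORPUS = [
    "the receiving party shall hold the confidential information in strict confidence",
    "this agreement shall be governed by the laws of the state of delaware",
]

def unusual_sentences(contract_sentences, cutoff=0.8):
    """Flag sentences with no close match in the standard corpus."""
    flagged = []
    for sent in contract_sentences:
        close = difflib.get_close_matches(
            sent.lower(), STANDARD_CORPUS, n=1, cutoff=cutoff)
        if not close:  # nothing similar is known: flag for a human
            flagged.append(sent)
    return flagged
```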

There will definitely be automation of the legal profession but people hear that and they imagine a big computer in court going “bleep bloop your honor”. The reality is just a lot of clerical work can be replaced by computers using very boring techniques that aren’t all that intelligent.

(Not that there are no sophisticated techniques that could be applied, of course, just that a lot of the big gains don’t come from them, and many companies really use the term “AI” for marketing purposes.)

In a similar way, people hear “robotics” and imagine Johnny 5 or whatever when the reality is more like a partly automated assembly line.
posted by vogon_poet at 1:46 PM on October 26, 2018 [4 favorites]


Eh. I would imagine that if the lawyers could be stripped of the training that is given to them in being slow and methodical so as to avoid malpractice claims, that number would drop substantially (though obvs not to 26 seconds).

I think this is the wrong way of looking at it. You have these algorithms that can spot a bunch of easy stuff. Lawyers should be using these as a first pass to save time. If the algorithms find nothing, then the human dives in and sees if there is anything more subtle going on.
posted by It's Never Lurgi at 1:52 PM on October 26, 2018 [4 favorites]


One note: Hackernoon isn't really "reporting this"; the article on Hackernoon was written by Jonathan Marciano, who just so happens to be... Director of Communications at LawGeex, the startup that produced the AI (and the study) being reported.
posted by danhon at 2:01 PM on October 26, 2018 [15 favorites]


"I think this is the wrong way of looking at it. You have these algorithms that can spot a bunch of easy stuff. Lawyers should be using these as a first pass to save time. If the algorithms find nothing, then the human dives in and see if there is anything more subtle going on."

This is of course what's threatening to a) a set of lawyers, and b) those who run law firms. What percentage of a firm's business is done by juniors/associates and billed out at a pretty exorbitant rate, but could be done by this algorithm *and* involves nothing more subtle going on?

My suspicion is that frequently [citation needed, etc] there probably isn't anything more subtle going on.

Now, you're still going to need a lawyer to tell you what the contract *means* from a business and practical point of view, and I don't see LSTM text-generation RNNs writing client memos anytime soon :)

Disclosure: ex-lawyer.
posted by danhon at 2:17 PM on October 26, 2018 [1 favorite]


Mechanical interpretation of the law is gonna be a hell of a thing.
posted by ethansr at 2:26 PM on October 26, 2018 [1 favorite]


One note: Hackernoon isn't really "reporting this"; the article on Hackernoon was written by Jonathan Marciano, who just so happens to be... Director of Communications at LawGeex, the startup that produced the AI (and the study) being reported.

Yeah, there's a very strong element of "consider the source" here. But speaking to legal AI more broadly, there have been many independent studies of the effectiveness of AI-driven document review in the electronic discovery context. By 2011, AI was already both faster and more accurate than human review. Grossman & Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 11 (2011) [pdf].

Subsequent rapid development in natural language processing has led to even further improvements, but the very best of those systems still put a human in the loop, essentially having a human teach the machine whenever the machine comes across a document it can't confidently classify.
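
In outline, that routing looks like the sketch below. This is purely schematic, assuming a scikit-learn-style classifier with predict_proba and invented thresholds, not any particular vendor's system:

```python
def route_documents(model, documents, low=0.35, high=0.65):
    """Auto-classify confident cases; send uncertain ones to a human.

    `model` is assumed to expose a scikit-learn-style predict_proba;
    the thresholds and doc.features attribute are illustrative only.
    """
    responsive, nonresponsive, needs_human = [], [], []
    for doc in documents:
        p = model.predict_proba([doc.features])[0][1]  # P(responsive)
        if p >= high:
            responsive.append(doc)
        elif p <= low:
            nonresponsive.append(doc)
        else:
            needs_human.append(doc)  # the human's label feeds retraining
    return responsive, nonresponsive, needs_human
```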

Fundamentally there's not that much difference between what AI does in ediscovery document review (is this document responsive to the request? is it likely privileged?) and what AI does in contract review (is this clause standard? is it suspicious?). The main difference is that contract review requires a large body of reasonably high quality contracts to serve as the baseline, but it only takes a few large firms getting on board to provide all the training data an AI firm needs.
posted by jedicus at 2:35 PM on October 26, 2018 [8 favorites]


I think this is the wrong way of looking at it. You have these algorithms that can spot a bunch of easy stuff. Lawyers should be using these as a first pass to save time. If the algorithms find nothing, then the human dives in and see if there is anything more subtle going on.

I see a number of comments to this effect, which I think, based on the way analogous products in other fields are hyped and sold, is a little bit innocent. These pitchmen will try to sell their product direct to in-house counsel as allowing them to avoid expensive external review. But of course only the big complex contracts that need to be reviewed in detail generally get kicked out to expensive external review to begin with. In those cases, the problem isn't speed or volume, it's reliability.

Predictive coding and the like (which operate in areas that do suffer from speed/volume bottlenecks) have certainly come a long way in the last few years. But AI has been so ferociously overhyped in every field of endeavor into which it's been imported (note here, the effective turning of a press release into an article) that big skepticism is called for here. Sometimes it seems there's no solid middle ground between crusty old lawyers who still fear and resent word-processing software and credulous goofuses who are desperate to throw around the latest tech buzzwords to their clients.
posted by praemunire at 2:39 PM on October 26, 2018 [4 favorites]


>clerical work can be replaced by computers using very boring techniques that aren’t all that intelligent.<

So like some lawyers I know then?
posted by twidget at 2:50 PM on October 26, 2018 [1 favorite]


So then there's going to be an arms race between these and another set of algorithms for writing contracts that don't draw attention to their worst parts. Like how spam tries to avoid spam filters.
posted by RobotHero at 2:56 PM on October 26, 2018 [2 favorites]


I look forward to the day when somebody introduces a Little Bobby Tables language into a contract, and pwns LawGeex artificial intelligence.
posted by jzb at 2:56 PM on October 26, 2018 [5 favorites]


I also think there's an interesting and significant difference in the tasks of predictive coding and whatever we want to call this.

One of the reasons predictive coding can outperform human review under the right parameters is (as with driving) that humans are not very good at document review. At least, the humans who do first-pass document review these days. Overseeing a flotilla of discovery attorneys is enough to drive a person to drink. But all the software does is identify documents that appear to be responsive to a request and (on the flip side) group together documents conceptually. I mean, I say "all." That's actually a lot. It's more than is practicable to do through human review in large cases. And it's pretty cool to play with the wheel. But it's really all about how the documents relate to each other. And even then it's only the start of a litigation. The software can't put the story together for you and it can't apply law to facts.
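
(At its crudest, that conceptual grouping is just vectorize-and-cluster, something like the sketch below, assuming scikit-learn is available; real ediscovery platforms are far more elaborate.)

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_documents(texts, n_clusters=5):
    """Group documents by rough conceptual similarity."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
    return labels  # labels[i] is the concept group for texts[i]
```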

When it comes to contracts, determining whether or not this or that standard provision is present is a trivial task. In-house paralegals can do it. Frankly, an average-intelligence college undergrad (who's familiar with the type of contract) can do it. Extremely complex securities transactions are often framed as addendums to industry-standard term sheets. What meaningful contract analysis does is relate contract provisions to the existing situation and to the various and sundry ways a situation might change. Litigation most often results when a scenario the parties didn't anticipate and provide for (or mistakenly thought was so unlikely that it wasn't worth spending the money to negotiate over) in the contract arises. How is an algorithm going to identify that possibility? Basically, either the task is sufficiently simple that adopting such tech is unlikely to yield huge savings or it's too complicated because it involves too complex an interaction between text and reality.

(It's also worth noting that NDAs are conceptually simple and relatively standardized across industries. And there are a lot of them floating out there in the wild. Getting a high-quality, up-to-date data set that's actually relevant to some high-stakes contract in hand to train your software against will not be so simple.)
posted by praemunire at 3:00 PM on October 26, 2018 [1 favorite]


first, they came for the lawyers

wait no that's not how it goes
posted by mwhybark at 3:50 PM on October 26, 2018


Didn't they used to call algorithm-based systems with a certain amount of industry-specific knowledge like these by the moniker "expert systems"?
posted by some loser at 5:48 PM on October 26, 2018 [2 favorites]


What'll happen here is that the AI will work great until something changes—new types of issues appear, or fashions in the way NDAs are structured shift, or whatever—and then it'll abruptly become useless. Eventually someone will notice and fix it, but not until after a few costly, high-profile failures occur. With humans, changes will be adapted to much faster—at least given the current state of AIs.
posted by Anticipation Of A New Lover's Arrival, The at 7:02 PM on October 26, 2018 [2 favorites]


The AI is just doing the analysis humans told it to do. It should be used for proofing.
posted by Ironmouth at 7:04 PM on October 26, 2018


The computers will only get better at this, and will be used more often. If you hire a lawyer to review a legal document, would you want them to use the slow and inaccurate method, or the fast and more accurate method?
posted by Triplanetary at 9:07 PM on October 26, 2018


A fancy highlighter pen, if you ask me.
posted by rhizome at 9:48 PM on October 26, 2018


Didn't they used to call algorithm-based systems with a certain amount of industry-specific knowledge like these by the moniker "expert systems"?

Yes, but that was a term of art for a very rules-based approach. If you read a bunch of medical textbooks, wrote a list of questions for a computer to ask patients, and then programmed in diagnoses based on answers and disease rates, that would be an "expert system." Incidentally, people have done this and it outperforms doctors, but that's a different thread.
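
For flavor, a rules-based expert system is basically hand-written branching like this (the medical rules here are invented for the example, not from any real system):

```python
def ask(question):
    """Crude yes/no prompt for the toy expert system."""
    return input(question + " [y/n] ").strip().lower().startswith("y")

def diagnose():
    # Every question and conclusion is hand-coded from "textbooks";
    # nothing here is learned from data.
    if ask("Do you have a fever?"):
        if ask("Do you have a stiff neck?"):
            return "refer immediately"
        return "likely viral; rest and fluids"
    if ask("Cough lasting more than three weeks?"):
        return "order a chest X-ray"
    return "no rule fired; see a human doctor"
```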

Feeding contracts to a computer and asking it to "learn" what a good contract is wouldn't be called an expert system because it's a different technique, even if it produces a system that's an expert.
posted by mark k at 8:24 AM on October 27, 2018 [1 favorite]


And it's going to get an "Amazon HR AI" (replace "showed" with "revealed") problem: it tells you what has been considered to be a good contract according to the limited source material and the people who designed the rubric. They're different because the Amazon HR thing was trying to increase inclusiveness and the contract filterer is trying to decrease it (make it as specific as possible), but the bias in the inputs and in the rulemaking is inescapable, Death of the Author be damned.
posted by rhizome at 11:07 AM on October 27, 2018


What if the AI can read your draft pleadings or motion and suggest cases that you haven't cited but probably should based on the content of your draft and the other cases you cited? That's what Casetext's CARA does.

I use CARA for this and really like it. The current generation of "legal tech" is like night and day compared to the kind of captive practice management crapware we used to get (and that stuff is still mostly crappy).

I've been daydreaming about learning enough programming to go work in this area instead of practicing, but I'd think it's both hard to get into and saturated with more qualified people who have already realized they would rather work at a cool startup than one more law office.

I look forward to the day when somebody introduces a Little Bobby Tables language into a contract, and pwns LawGeex artificial intelligence.

Some contracts are arguably already written in Brainfuck. /rimshot

I don't see LSTM text-generation RNNs writing client memos anytime soon :)

Some access-to-justice apps go so far as to generate scripts to be read in court by a layperson.
posted by snuffleupagus at 4:51 PM on October 27, 2018


I should add, what CARA is maybe even better for is digesting opposing motions, in terms of the advantage it provides versus working without it. Westlaw and Lexis at least try to surface related authority, but do it based on the cases I'm pulling already.

When it comes to contracts, determining whether or not this or that standard provision is present is a trivial task. In-house paralegals can do it. Frankly, an average-intelligence college undergrad (who's familiar with the type of contract) can do it.

And they do -- plug "contract analyst," "contract manager" or "contract coordinator" into Glassdoor or Indeed.
posted by snuffleupagus at 4:59 PM on October 27, 2018 [1 favorite]


Much of legal AI is about augmenting and accelerating the human lawyer and helping them not make mistakes or overlook details.

I work for an IT automation company. Our founder was very fond of the phrase “we’re helping build Iron Man, not robots.”
posted by mph at 6:38 PM on October 27, 2018


How many people will one iron man replace?
posted by caddis at 7:17 PM on October 27, 2018

