When AI Kills

Reflections on the recent Uber tragedy, and what it means for AI applications to legal practice.

Sunday, March 18, 2018, was arguably the first day that artificial intelligence killed someone: the first recorded pedestrian fatality caused by a self-driving car.

A 49-year-old woman, Elaine Herzberg, was walking her bike in Tempe, Arizona, when she was struck and killed by an autonomous Uber vehicle.

As the leader of an artificial intelligence startup, I had a lot to reflect on after this tragic event. Like Uber, we create artificial intelligence technology. And, as with Uber, people rely on that technology to perform well consistently. Of course, Casetext’s CARA AI cannot literally kill anyone. But it is actively used by, among others, criminal defense and death penalty attorneys, as well as attorneys at some of the world’s largest law firms litigating billion-dollar lawsuits. The stakes are real.

To me, the Uber tragedy is a reminder of a few truths about artificial intelligence as it applies to legal practice. First, we aren’t ready for true “driverless” AI in law. Every current application of artificial intelligence to legal practice, whether Casetext’s CARA AI for legal research, Kira Systems for due diligence, or LawGeex for contract review, produces results meant to make an attorney’s work more efficient or to surface insights that would otherwise go unfound. This is appropriate: despite the marketing hype and sci-fi fears, artificial intelligence technology is no replacement for human judgment.

Second, there will be mistakes and accidents, sometimes severely tragic ones, even where humans are there to “drive” the AI. Indeed, there was a “safety driver” behind the wheel of the Uber car, even though the car was operating in autonomous mode. Both humans and technology are imperfect. One hopes that the two working in tandem will compensate for each other’s weaknesses: where a human is about to make a mistake, an automated process can guide them in the right direction, and vice versa.

For example, Casetext’s technology is most often used by attorneys to check their own briefs for cases they missed but should have included, and by opposing counsel and judges to run the same check with different motivations: a classic example of advanced technology providing a second pair of eyes. But humans working with technology cannot eliminate all mistakes and accidents. All we can do is keep working to get closer and closer to perfection.

Finally, we need to decide how to respond when artificial intelligence technology causes errors, sometimes tragic ones. Elie Mystal and a supermajority of ATL readers believe Uber should be held liable. But legal fault and liability aside, I believe it is important that we evaluate whether a new technology represents a step forward or backward from the status quo. Understanding that error is inherent to technology and humans alike, the most important question in judging a technology is whether it increases or reduces those errors.

In the case of Uber, based on the information made public to date, it is impossible to know whether a non-autonomous vehicle, piloted by a regular human driver, would have avoided this tragic collision. We’ll learn more as time goes on, but at least for now, according to Tempe’s chief of police, “[i]t’s very clear it would have been difficult to avoid this collision in any kind of mode [autonomous or human-driven] based on how she came from the shadows right into the roadway.”

That said, within legal technology the evidence so far is more or less incontrovertible: LawGeex recently released a study showing its product to be far less error-prone than human reviewers at parsing out information in non-disclosure agreements.

As artificial intelligence technology becomes more ingrained in our everyday lives, we’ll see enormous progress and fundamental shifts, including in the way that law is practiced. But that technological progress will not replace an attorney’s judgment — and it will not be error-free — though it will likely improve the practice and greatly reduce the number of mistakes overall. We need to keep these truths in mind as this technology becomes an increasing part of law practice in the years to come.

(These are just my initial, raw reactions in a conversation that will likely be raging for years. I’m curious if you have different perspectives — tweet me at @Jacob_Heller.)


Jake Heller is the co-founder and CEO of Casetext. Before starting Casetext, Jake was a litigator at Ropes & Gray. He’s a Silicon Valley native and has been programming since childhood. For more information about CARA, Casetext’s AI-backed legal research assistant, visit info.casetext.com/cara-ai.