Artificial Confusion: The (Overblown) Threat Of Artificial Intelligence

The reality is that we are many years away from the rise of artificial superintelligence, especially in the legal industry.

Elon Musk recently warned that an international artificial intelligence race is more likely to cause World War III than a 20th Century-style arms race. We lawyers have our own “end-of-the-world” concern: that AI will ultimately end most lawyers’ careers. But before we start speculating about what havoc HAL 9000, J.D. could wreak on the legal profession, we should keep in mind a key distinction between two different forms of artificial intelligence: what is known as “weak AI” and “artificial superintelligence.”

The latter is the flavor of AI you are most likely to encounter in a sci-fi movie; that is, an artificially intelligent machine that has the capacity to learn and master a diversity of topics, much like a human can, but at a rate that far surpasses human ability. You may be picturing a robot that can do everything a lawyer does, in a fraction of the time, and at a fraction of the cost — every lawyer and managing partner’s worst nightmare. And indeed, applying artificial superintelligence to the profession would logically end with robots taking over, making lawyers replaceable.

The reality is that we are many years away from that happening, especially in the legal industry. The AI applications we see today, from driverless cars to image recognition, are “weak” AI. This is the same sort of technology on display in Google’s AlphaGo, an AI program that beat the world champion of the game Go. “Weak” AI systems can process large sets of information, but are only able to do a single task really well (meaning that your self-driving car won’t teach itself how to beat you at Go).

Realistically, only services powered by “weak” AI, rather than artificial superintelligence, will be deployed in the legal profession in our lifetimes. It will be several more decades before AI can “learn” how to synthesize diverse skill sets. As attorneys, we deeply analyze complex subjective issues, we make strategic decisions, and we hone our powers of persuasion. That’s why our clients turn to us. Teaching a machine any one of these skills will be a huge challenge; teaching a single machine to apply all of them simultaneously is immeasurably more difficult.

This doesn’t mean that today’s “weak” AI isn’t extremely powerful. Even though current AI technologies will not replace a litigator’s strategic judgment, they have already fueled game-changing innovations in the legal market. Like self-driving cars, AI-backed legal technology can automate certain key tasks that we previously thought had to be done by humans, freeing users to focus on the tasks that do require complex analysis and creative thinking.

Take Kira Systems, a contract analysis platform. Through natural language processing and machine learning, Kira’s AI can process thousands of examples of contract language relevant to the user, providing valuable insights into those contracts. Casetext’s CARA, a modern legal research tool, can analyze a document (like an opponent’s brief) and identify the cases and previously filed legal briefs most relevant to the issues it raises. Both forms of AI save lawyers time on the more mechanical tasks of legal practice, which ultimately leads to more satisfied clients, less time spent on non-billable work, and, very often, more business.

Legal practice often boils down to pattern recognition, and even “weak” AI can uncover patterns that humans aren’t capable of recognizing. For example, the old legal research model presumed that you needed a massive human effort to summarize cases to provide the information litigators need. But new technology can find the patterns already present deep in the law. For instance, it turns out that when judges summarize another case in an opinion, they often repeat similar language. Casetext uses those patterns to uncover judge-written summaries of judicial opinions, which most attorneys trust more than any other source. Legal professionals can look forward to even wider-reaching innovations as AI systems’ machine learning and natural language processing capabilities help attorneys do their best work.
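For readers curious what this kind of pattern recognition looks like under the hood, here is a toy Python sketch. It is not Casetext’s actual system — the passages, the similarity measure, and the threshold are all invented for illustration — but it shows the basic idea: when two later opinions describe the same case in near-identical language, that repetition can be detected automatically and flagged as a candidate judge-written summary.

```python
# Toy illustration (not Casetext's method): spotting repeated language
# across opinions as candidate judge-written case summaries.
# All passages and the 0.7 threshold below are invented for demonstration.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Return the ratio of matching text between two passages (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


# Hypothetical passages from later opinions discussing an earlier case.
citing_passages = [
    "The court held that the contract was void for lack of consideration.",
    "In that case, the court held the contract void for lack of consideration.",
    "The weather that day was unseasonably warm.",
]

# Pair up passages that repeat similar language -- likely summaries of
# the same holding rather than coincidental overlap.
THRESHOLD = 0.7
candidates = [
    (i, j)
    for i in range(len(citing_passages))
    for j in range(i + 1, len(citing_passages))
    if similarity(citing_passages[i], citing_passages[j]) >= THRESHOLD
]

print(candidates)  # only the two near-duplicate holdings pair up
```

A production system would of course work at vastly greater scale and with far more sophisticated natural language processing, but the core insight is the same: repetition across documents is a signal a machine can find that no human could hunt down by hand.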

The truth is that even though AI can uncover insights about legal writing, decades of work remain before AI can craft arguments with the depth of a human. It’s going to be a while before robots argue cases at the Supreme Court. Until then, we can dedicate the time saved by AI tools to thinking about how to avoid WWIII.


Jake Heller is the Founder and CEO of Casetext. Before starting Casetext, Jake was a litigator at Ropes & Gray. He’s a Silicon Valley native, and has been programming since childhood. For more information about CARA, Casetext’s AI-backed legal research assistant, visit info.casetext.com/cara-ai.
