When Is AI Good Enough?


One of the common objections to artificial intelligence or machine learning is that a computer will inevitably make mistakes that humans wouldn’t make. On the surface, it seems like a good argument. “I’d never make that mistake!” But does that logic hold up?

Trusting An Algorithm Doesn’t Always Feel Right, But Maybe It Should

Imagine you’re a doctor in an ER, and a patient comes in with chest pains. What do you do? Not admitting the individual could cost them their life, but admitting a patient who isn’t in danger ties up scarce hospital resources that other patients need. Nobody is perfect, but a wrong decision in a case like this can impact lives.

In his book “Blink: The Power of Thinking Without Thinking,” Malcolm Gladwell explored the difference between trusting a simple algorithm with decisions such as these and trusting the expertise of doctors who have diagnosed heart attacks throughout their careers. Regardless of Gladwell’s views on remote work (for which he is currently under scrutiny), his observations in “Blink” raise some interesting points about artificial intelligence.

Gladwell recounts how Brendan Reilly at Cook County Hospital adopted a basic researcher-developed rubric that had never been tested in practice. The algorithm was simple, combining the results of an electrocardiogram (ECG) with three simple questions (a rough sketch follows the list below):

  1. Is the pain felt by the patient unstable angina?
  2. Is there fluid in the patient’s lungs?
  3. Is the patient’s systolic blood pressure below 100?
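To show just how spare this decision rule is, here is a minimal sketch in Python. The function name, inputs, and admission categories are hypothetical simplifications for illustration only; they are not the actual hospital protocol and certainly not medical guidance.

```python
def triage_chest_pain(ecg_suggests_acute_ischemia: bool,
                      unstable_angina: bool,
                      fluid_in_lungs: bool,
                      systolic_bp_below_100: bool) -> str:
    """Hypothetical sketch of the simple decision rule described above.

    Combines one ECG finding with three yes/no risk factors and maps the
    result to a level of care. The mapping below is illustrative only.
    """
    risk_factors = sum([unstable_angina, fluid_in_lungs, systolic_bp_below_100])

    if ecg_suggests_acute_ischemia:
        # ECG evidence plus any risk factor points to the highest level of care.
        return "coronary care unit" if risk_factors >= 1 else "intermediate care"
    # Without ECG evidence, escalate only when multiple risk factors are present.
    return "intermediate care" if risk_factors >= 2 else "observation / short-stay bed"
```

The point is not the specific mapping but how few inputs the rule needs, and how mechanically it can be applied under time pressure.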

The results were fascinating. The algorithm was 70% better at recognizing patients who weren’t having heart attacks — saving significant hospital resources — and it was right more than 95% of the time for the most serious patients. By comparison, the doctors were right between 75% and 89% of the time.


An argument can be made that the algorithmic approach will sometimes be wrong in cases where a well-trained doctor exercising judgment would have been right; but in this case, a simple algorithm delivered better outcomes on the whole.

The primary reason is that a number of factors appear important but aren’t when deciding whether to admit a potential heart attack patient. A family history of cardiovascular issues, prior heart attacks, lifestyle factors, and a patient’s gender or race do not really matter in the moment someone is sitting in the emergency room with chest pains.

Several lessons from this story apply directly to the use of AI, robotic process automation, and machine learning (ML). They are also directly applicable to legal technology applications.

Misconception: Even The Best Machines Will Make Mistakes That Humans Won’t

As humans, we tend to start with a bias that our current solution is the standard against which other solutions must be compared. That bias presumes the new solution must do everything the current one does. In some cases there are non-negotiable items. But when a new solution doesn’t meet the standard of our current one, we tend to focus on that failure rather than the overall outcome. It’s the same with AI: if the overall result is better, then maybe we should just trust the algorithm.


Simple Can Be Better

Humans tend to overthink problems, especially when a group works on them together. A group will bring multiple views, which can suggest different ways to solve a problem. Where group dynamics go wrong is in the solutioning: we can over-engineer a solution to satisfy every approach the group comes up with.

The emergency room heart attack algorithm could have had 15 steps and, technically, been better. But even after eliminating data points that look relevant but aren’t (e.g., family history), a more complex algorithm might have defeated the purpose.

In the emergency room example, time is of the essence. A more complex algorithm could have taken longer to administer, or all the data points might not have been available, causing troubleshooting challenges.

It’s best to start by keeping things simple, then improve from there.

Focus On Outcomes

An algorithm facilitates an outcome. As humans, we don’t always see the big picture. In the emergency room example, we might think it is better to err on the side of caution and admit patients that might be having a heart attack. That makes sense. But is it truly a better outcome for the hospital when there are a limited number of beds and a three-hour backlog waiting for triage? Probably not.

Focusing on the big picture and overall outcomes helps clarify where AI delivers real benefit.

In Legal Tech, Understand What AI Can Do That Humans Can’t

If we can learn to embrace AI’s potential, it can serve as a powerful tool to help us save resources and deliver better service to our organization or clients. Contract lifecycle management (CLM), for example, is all the rage right now. Tracking and cataloguing contracts with key data elements like term, special clauses, and limitations of liability can have a great impact, and using AI to tag data elements can be very effective. Let’s say you’re given a project to centralize and analyze all customer contracts in a law department. Here are some considerations to keep in mind during the process.

We need to be able to trust an AI algorithm even if it isn’t perfect. Manual tagging by humans is an inconsistent process: different people tag differently, and employee turnover can cause a system that depends on human process to decay in effectiveness. We need to trust the algorithm and accept that a consistent, machine-driven process can be good enough.

Starting small can help us build a better process. Is it really a requirement to extract and interpret the meaning of special clauses in contracts? Or is it OK to just flag special clauses in a contract so a report can be run that highlights which contracts may require research?
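As a concrete illustration of that flag-first approach, here is a minimal sketch in Python. The clause keywords, folder layout, and report format are all hypothetical, and a production CLM tool would rely on trained models rather than simple keyword matching; the sketch only shows how modest a “flag, don’t interpret” first pass can be.

```python
import csv
from pathlib import Path

# Hypothetical keywords that suggest a clause worth a human's attention.
# A real CLM system would use trained extraction models, not string matching.
SPECIAL_CLAUSE_KEYWORDS = {
    "limitation of liability": "limitation_of_liability",
    "indemnif": "indemnification",
    "auto-renew": "auto_renewal",
    "most favored": "most_favored_customer",
}

def flag_contract(text: str) -> list[str]:
    """Return the hypothetical clause tags whose keywords appear in the contract text."""
    lowered = text.lower()
    return [tag for keyword, tag in SPECIAL_CLAUSE_KEYWORDS.items() if keyword in lowered]

def build_report(contract_dir: str, report_path: str) -> None:
    """Scan a folder of plain-text contracts and write a CSV of flagged clauses."""
    with open(report_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["contract", "flagged_clauses"])
        for path in Path(contract_dir).glob("*.txt"):
            tags = flag_contract(path.read_text(errors="ignore"))
            writer.writerow([path.name, "; ".join(tags) or "none"])

if __name__ == "__main__":
    build_report("contracts", "clause_report.csv")  # hypothetical paths
```

The output is only a starting point: a report of which contracts deserve a closer look, which is often good enough for a first pass.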

We should focus on outcomes. It’s important to understand the goals of the organization and the reasons contracts are being tagged in the first place. Is the real problem a lack of visibility into the universe of contracts? Or is the primary goal a faster, easier renewal process so your sales team can preserve revenue and have more time to focus on finding new customers? Understanding the broader goals informs the success or failure of a CLM project, and that success may not hinge on imperfect tagging of a specific data element by AI.

Like the doctors at Cook County Hospital, law firms and legal professionals should begin to learn to “trust the algorithm” and embrace AI across more applications where it can be useful.


Ken Crutchfield is Vice President and General Manager of Legal Markets at Wolters Kluwer Legal & Regulatory U.S., a leading provider of information, business intelligence, regulatory and legal workflow solutions. Ken has more than three decades of experience as a leader in information and software solutions across industries. He can be reached at ken.crutchfield@wolterskluwer.com.
