Thomson Reuters Launches CoCounsel 2.0

New release promises results three times faster than the last version.

It seems like just last year we were talking about CoCounsel 1.0, the generative AI product launched by Casetext and then swiftly acquired by Thomson Reuters. That’s because it was just last year. Since then, Thomson Reuters has worked to marry Casetext’s tool with TR’s treasure trove of data.

It’s not an easy task. A lot of the legal AI conversation glosses over how constructing these tools requires a radical confrontation with the lawyer’s mind. Why do attorneys do what they do every day? Are there seemingly “inefficient” steps that actually serve a purpose? Does an AI “answer” advance the workflow or hinder the research alchemy? As recently as April, Thomson Reuters was busy hyping the fruits of its efforts to get ahead of these challenges.

But as it turns out, they were already preparing to move on from CoCounsel 1.0. As we kick off the International Legal Technology Association annual convention, Thomson Reuters opened the show by announcing an all-new CoCounsel release.

Armed with the experience gained over the last year and a melange of multiple LLMs under the hood, TR promises the new CoCounsel will operate more intuitively, better understanding the way lawyers communicate and work, and will deliver better results thanks to an enhanced ability to interpret documents. It also, of course, includes the two recently announced CoCounsel features.

CoCounsel 2.0 will also bring additional and upgraded capabilities for legal professionals. The just-launched Claims Explorer in Westlaw Precision with CoCounsel simplifies claims research by enabling legal professionals to enter facts and identify applicable claims or counterclaims. CoCounsel Drafting, the new, end-to-end GenAI-enabled solution from Thomson Reuters, accelerates drafting by as much as 50%.

And for the biggest-volume customers, the press release promises access to its high-throughput capabilities.

Offer CoCounsel High Throughput Beta, for teams needing to automate the review of hundreds of thousands or even millions of documents, with human-level accuracy. This capability has been successfully deployed on an as-needed basis and will now be available to all CoCounsel users.

According to the press release, CoCounsel 2.0 runs three times faster than the already swift prior version.

When I saw the very first iteration of CoCounsel, what struck me was its commitment to telling the user what it didn’t know. Generative AI desperately wants to give an answer even if it’s got nothing useful to say, and CoCounsel was better about saying “sorry, we can’t answer that based on the knowledge at hand.” That commitment carried over to the TR product, with Raghu Ramanathan, president of TR’s Legal Professionals segment, telling me that TR’s AI team leans on confidence scores — even adopting negative scores in some contexts — to ensure that the customer isn’t duped by output getting out ahead of its skis.
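To make that idea concrete, here’s a minimal, purely hypothetical sketch of what confidence-gated answering can look like. The score range, threshold, and names below are invented for illustration; this is not Thomson Reuters’ implementation.

```python
# Hypothetical illustration of confidence-gated answering; not TR's actual code.
from dataclasses import dataclass


@dataclass
class ScoredAnswer:
    text: str
    confidence: float  # assume -1.0 to 1.0; negative means the evidence cuts against answering


CONFIDENCE_FLOOR = 0.6  # illustrative threshold, not a real product setting


def respond(scored: ScoredAnswer) -> str:
    """Only surface the model's answer when its confidence clears the floor."""
    if scored.confidence < 0:
        # A negative score is treated as an explicit signal to decline.
        return "Sorry, the available sources point away from a reliable answer here."
    if scored.confidence < CONFIDENCE_FLOOR:
        return "Sorry, we can't answer that based on the knowledge at hand."
    return scored.text


print(respond(ScoredAnswer("The filing deadline is 30 days after entry of judgment.", 0.82)))
print(respond(ScoredAnswer("Possibly three years?", 0.31)))
```

The point isn’t the particular numbers; it’s that the system has a built-in path to saying “no answer” instead of bluffing.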

But while safety comes first, the goal is always building a tool that can decipher the user’s prompt and deliver an actionable answer if there’s one to be had. That’s where the work being done to create a balanced blend of LLMs comes into play.

The company uses multiple LLMs in CoCounsel 2.0, leveraging the strengths of each model to deliver highest accuracy and performance. To ensure the latest GenAI advantages are available to Thomson Reuters customers, the company evaluates every new LLM release and upgrade. In CoCounsel 2.0, Thomson Reuters is testing adding Google Gemini 1.5 Pro to its production models suite. This updated models suite allows CoCounsel to take advantage of a substantially longer “context window,” unlocking new capabilities, increasing processing throughput, and improving the ability of CoCounsel to analyze complex patterns in legal documents. Testing shows that a blend of multiple LLMs to power CoCounsel 2.0 delivers optimal accuracy and user experience.
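As a rough illustration of what “blending” multiple LLMs can mean in practice, here’s a hypothetical routing sketch. The model names, context limits, and selection rule are invented; this is not how CoCounsel actually picks its models.

```python
# Hypothetical sketch of routing a request across a blend of LLMs; not Thomson Reuters' code.

# Rough context windows, in tokens, for a stand-in general model and a stand-in
# long-context model (the latter echoing the Gemini 1.5 Pro mention above).
MODEL_CONTEXT_LIMITS = {
    "general-model": 128_000,
    "long-context-model": 1_000_000,
}


def estimate_tokens(text: str) -> int:
    # Crude estimate: roughly four characters per token.
    return len(text) // 4


def pick_model(document_text: str) -> str:
    """Route to the long-context model only when the request won't fit elsewhere.

    A real blend would also weigh per-task accuracy, latency, and cost; this
    captures only the "longer context window" point from the quote above.
    """
    if estimate_tokens(document_text) > MODEL_CONTEXT_LIMITS["general-model"]:
        return "long-context-model"
    return "general-model"


print(pick_model("A two-page motion to dismiss." * 50))  # general-model
print(pick_model("x" * 5_000_000))                       # long-context-model
```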

On last week’s Legal Tech Journalists’ Roundtable, I drew an extended analogy comparing legal AI to auto racing. For instance, every F1 team builds a different car, but they don’t all build their own engines. So while Ferrari and Haas run wildly different cars, they both sport Ferrari engines. Generative AI products — at least in the legal sector — are like the car manufacturers, dependent on someone like OpenAI to build the guts. But how that underlying model is put to work or, as in this case, how a vendor blends competing LLMs to its bespoke specifications represents the real difference between everyone in this space.

Which could well be where legal AI providers break away from the hype cycle pack. Not everyone out there has the resources or technical power to take all these tools and build a Frankenstein’s hot rod of an algorithm. Those that find a way to get the most out of their AI medley are going to have the advantage.

And for all the high tech buzzword-driven conversation, that’s a fundamentally human question. Who are you trusting to be the architects of the right mix of technology? Whose engineering team do you think has cracked this rapidly evolving case? It’s going to be an interesting few years.

Earlier: Westlaw AI Launch Forces Confrontation With The Inner Workings Of A Lawyer’s Mind
Is That A Professional-Grade, Legal GenAI Assistant In Your Pocket Or Are You Just Happy To See Me?
Legal AI Knows What It Doesn’t Know Which Makes It Most Intelligent Artificial Intelligence Of All


Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
