Mike Dahn, head of Westlaw Product Management for Thomson Reuters, and Joel Hron, head of AI and Thomson Reuters Labs, recently joined Bob Ambrogi for a conversation on the company’s generative AI strategy. Legal Current listened in to share highlights from their podcast. 

They started with the impact of last year’s launch of AI-Assisted Research on Westlaw Precision.

“The most common phrase I hear from customers is game changer,” Dahn said. “They often talk about saving hours of time, so it’s a really big deal. We’ve had corporate counsel say, ‘Outside counsel would have charged me $2,500 for the answer I just got.’” 

He said it’s been a transformational change. 

Explosion of Large Language Models 


“With the explosion of large language models (LLMs), we really have seen a big explosion in both adoption and capabilities in the last year,” Dahn said. “We’re now able to understand language patterns far better than we’ve ever been able to before. And that’s incredibly useful in legal research.” 

He explained how the complexities of legal research are further complicated by the nuances of the law. 

“There’s so many ways for courts and for legislators to describe not only the law but also the facts that are relevant to the law,” he said. “And lawyers are often looking for very specific facts. They want the cases that line up most closely with their client situation. And they’re looking for the most compelling arguments that they can make and the most compelling ways that they can phrase those arguments.” 

Hron agreed that LLMs are helping legal professionals conduct research more quickly and effectively.  

“They’re able to navigate through this information in a much more streamlined way and uncover, ultimately I think, more of the truth than they would have been able to previously,” Hron said. 

While generative AI makes legal research more efficient, Hron and Dahn emphasized that Thomson Reuters views AI-assisted research as supplemental to traditional research. 

“We’re very clear with our customers that this should be used to dramatically accelerate thorough research, but it should not be used as a replacement for thorough research,” Dahn said. “There’s still nuance that the lawyer might need after getting an answer back in AI-assisted research – and they should do additional research.” 

Democratizing Access to AI  

Beyond strengthening legal research, Hron highlighted how generative AI is democratizing access to AI. 

“Before these models, it required a degree of specialization from an engineer or scientist to be able to get in and understand them and work with them and optimize them,” Hron said. “And now it really is as easy as typing a sentence into a prompt box. That mode of interaction and that ease of access has allowed us to really shift left on who is able to engage in the development of AI.” 

Hron said this democratization has helped fuel the Thomson Reuters Generative AI Platform – enabling more ideation in a tangible way and further enabling the technology to be used across the organization securely.  

Hron and Dahn also discussed efforts to expand generative AI enhancements across lawyers’ workflows, beyond legal research. Dahn noted how capabilities including drafting and other skills – akin to having an AI assistant “at your side” – will be driven by generative AI technology as well as human expertise. 

“Having our attorney editors working on the things that AI cannot do well yet and also helping to train the AI – that combination of our editorial excellence combined with the very latest in AI is going to help us to do dramatically more for lawyers than other companies,” Dahn said. 

Build, Buy, and Partner Strategy 

Ambrogi asked how the Thomson Reuters build, buy, and partner strategy, including last year’s Casetext acquisition, shapes the company’s approach to AI.  

“Our thinking on this is we want to do everything possible for our customers and that this technology is extraordinarily exciting,” Dahn said. “So we’re doing all of those things with AI to make sure that we’re bringing as much of the capability to our customers as fast as we can.” 

“CoCounsel really had a unique vision and approach to this broadly applicable assistant concept and how to develop these skills across a particular discipline quickly and efficiently and also with quality at the same time,” Hron added, noting the value it brings will extend across the Thomson Reuters product portfolio. 

Prioritizing Accuracy  

Hron and Dahn also discussed hallucinations and why Thomson Reuters prioritizes the accuracy of generative AI outputs. Hron described using a technique called retrieval-augmented generation (RAG) to help reduce errors by grounding LLMs in reliable content. 
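For readers unfamiliar with the technique, the sketch below shows the basic shape of RAG: retrieve a handful of trusted passages relevant to the question, then ask the model to answer only from those passages rather than from its general training data. The sample corpus, the simple term-overlap scoring, and the call_llm stub are hypothetical placeholders for illustration; this is not the Westlaw or CoCounsel implementation.

```python
# Minimal RAG sketch: retrieve relevant passages from a small, trusted corpus,
# then ground the model's prompt in those passages. Corpus, scoring, and the
# call_llm stub are illustrative placeholders only.
import re
from collections import Counter

# Hypothetical corpus of editorially curated passages (e.g., headnotes).
CORPUS = [
    "A court may grant summary judgment when there is no genuine dispute of material fact.",
    "The statute of limitations for breach of a written contract is four years in many states.",
    "An appellate court reviews questions of law de novo.",
]

def tokenize(text: str) -> Counter:
    """Lowercase word counts; a stand-in for a real retrieval index."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by simple term overlap with the query and keep the top k."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda p: sum((q & tokenize(p)).values()), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM API call."""
    return f"[model response grounded in a prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    question = "What standard applies to summary judgment?"
    passages = retrieve(question, CORPUS)
    print(call_llm(build_grounded_prompt(question, passages)))
```

In a production system the keyword overlap above would be replaced by purpose-built search or embedding retrieval, and the quality of the retrieved passages depends heavily on how well the underlying content is curated and indexed – which is where the editorial assets Dahn describes next come in.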

Dahn added that the Thomson Reuters legal solutions are backed by more than a century of editorial enhancements, including Key Numbers, headnotes, proprietary indices and KeyCite, among others. These enhancements are maintained by a large editorial staff marking up case law to help get the right material to the language model. 

“It really helps in manual research, but it also helps a lot in our RAG process to provide an AI-assisted research capability that produces accurate results,” Dahn said. “We’ve got a whole army of attorney editors that are focused on the law and helping us build the right connections for both customers and for the AI. What we can do with AI is substantially better than what other companies can do because of those editorial assets and those actual editors.”   

Their predictions for the future of AI included a substantial improvement in workflows – for lawyers as well as professionals in other industries as the technology becomes more widespread. 

“We’re going to see dramatic change in lawyer capability, but I don’t think we’re going to see massive layoffs in law firms because of AI,” Dahn said. “It’s lawyers who use AI that are going to replace lawyers [who don’t use AI].”  

Listen to the full podcast for more of Dahn and Hron’s AI insights. 
