Mike Dahn: How Harmful Are Errors in AI Research Results?
AI and large language models have proven to be powerful tools for legal professionals. Our customers are seeing efficiency gains and tell us these tools are greatly beneficial. Lately, however, there has been much discussion of errors and hallucinations; what hasn't been discussed is the extent of the harm an error actually causes, or the value an answer can still provide even when it contains one.
First, let's settle on terminology. We should use terms like "errors" or "inaccuracies" instead of "hallucinations." "Hallucination" sounds smart, as if we're AI insiders who know the lingo, but the term is often defined narrowly as a fabrication, which is just one type of error. Customers will be just as concerned, if not more so, about real statements from real cases that are nonetheless incorrect answers to the question at hand. "Errors" and "inaccuracies" are broader terms that better describe the full range of problems we care about.
Visit the Thomson Reuters Innovation Blog to read the full post from Mike Dahn, head of Westlaw Product Management, Thomson Reuters.