Legal professionals’ perspectives on generative AI run the gamut from concerns about threats to jobs and how often the technology “hallucinates” to excitement about applying it to legal work. These are among the findings of the ChatGPT & Generative AI within Law Firms report, released last month by Thomson Reuters.

Based on surveys of law firms in the United States, UK, and Canada, the report found that legal professionals are aware of ChatGPT and generative AI yet uncertain about applying the technology. The report has earned attention on both sides of the pond, and Legal Current is sharing journalists’ and industry influencers’ takes on the findings.

Blogger Nicole Black supports legal professionals integrating new technologies. In coverage of the report in Above the Law, Black explained, “By embracing these technologies, you’ll be better positioned to keep up with the rapidly evolving legal technology landscape. … The potential applications of Generative AI in the legal industry are many, and the technology is already making an impact. Presently, two of the most prevalent methods of incorporating Generative AI into legal workflows include utilizing it as a virtual legal assistant and streamlining contract review.”

Writer Joanna Goodman described how legal professionals are sorting through the hype around generative AI and focusing on practical applications of it, “finding new ways to limit potential risks.”

In Goodman’s coverage of the report in Law Society Gazette, she called out the importance of recognizing the data challenges with the technology: “While it is unlikely that anybody would provide GPT-generated legal documentation or advice without checking it, a combination of AI and human error could potentially lead to serious real-world consequences.”

Client confidentiality was another area that gave report respondents pause, as Global Legal Post Editor Victoria Basham noted.

Her coverage of the report pointed out: “Another common concern among respondents centred on the data needed for the system to function, particularly if that included private client data. One noted concern about the confidentiality of source material used to generate AI output, while another took issue with how the data would ultimately be used, citing the need to ensure adequate guardrails, such that the AI is not learning incorrect or inappropriate behaviours.”

Blogger Steve Embry recommended lawyers take a common-sense approach to addressing their concerns with the technology.

“If you don’t want to breach client confidence by using generative AI, then don’t type anything confidential in what you ask generative AI to do,” Embry said in summary of the report in TechLaw Crossroads. “The idea we have to protect client confidences doesn’t outright preclude using technology tools. The rules just require lawyers to make sure the confidences are protected.”

Embry wasn’t surprised by the report’s finding that only 51% of respondents said ChatGPT and generative AI should be applied to legal work. He said, “If a recent Thomson Reuters Report is any indication, lawyers and law firms plan to approach generative AI like they do most technology. Slowly and with skepticism. Granted, it’s still early for a profession that is notoriously slow in adopting technology and innovating anything.”

Legal Cheek Editor Thomas Connelly also highlighted lawyers’ hesitancy around the technology in his coverage of the report: “The vast majority of lawyers recognise AI’s ability to undertake legal work, new research has found, but many feel the profession is better off keeping them separate.”

For more insight on legal professionals’ attitudes toward ChatGPT and generative AI, read additional reactions to the report findings and download the full report.
