Thomson Reuters recently hosted a Legal Geek Takeover, a half day of virtual sessions on legal innovation, technology and trends. A highlight of the event was the AI Ethics session hosted by a Thomson Reuters team (Andrew Fletcher, director of TR Labs; Milda Norkute, senior designer; and Nadja Herger, data scientist) along with Eric Wood, partner at Chapman and Cutler LLP.

Their session explored the ethical developments and adoption of AI. Legal Current had an opportunity to catch up with the team after the Legal Geek Takeover, and they shared insights from their session. Below is a recap of the conversation. 

Legal Current: There are several applications of AI in the legal domain. Which are you most excited about and why?

Wood: I am most excited about the next generation of AI-powered contract analysis applications. While current tools are primarily limited to identifying provisions or clauses in contracts, I am hopeful that in the near future, advancements in AI will enable these tools to provide higher-level functionality, such as automatically revising contracts or accurately assessing compliance with specific contract guidelines.

LC: We see AI ethics in the headlines daily, often addressing the potential for AI to produce biased or inaccurate outcomes. What ethics-related limitations should we be aware of?

Norkute: Ethical implications around AI can arise at any stage of a typical AI product lifecycle. AI can fail, and it does so in sometimes unexpected ways. For AI to have the largest impact, it’s important to be aware of its limitations, including potential biases, as well as its intended use. Some of the aspects to consider include: What are the regulatory requirements in the specific domain? What is the impact when AI makes a mistake? How was the training data collected? Is there bias in the training data?

We have to keep in mind that AI solutions are typically only as good as the data used to train them. It's also important to monitor an AI solution once it has been deployed, because it may encounter data that differs from its training data and become less accurate than it was at launch. The context in which AI solutions are used, and the data flowing in and out of them, are key.

LC: How are companies trying to stay ahead of proposed AI regulations or legislation?

Fletcher: Our approach at Thomson Reuters is guided by our AI principles, which promote trustworthiness in our continuous design, development and deployment of AI. We follow five principles, which will evolve as the field of AI and its applications mature:

  1. That Thomson Reuters will prioritize safety, security, and privacy throughout the design, development and deployment of our AI products and services.
  2. That Thomson Reuters will strive to maintain a human-centric approach and to design, develop and deploy AI products and services that treat people fairly.
  3. That Thomson Reuters aims to design, develop and deploy AI products and services that are reliable and that help empower people to make efficient, informed, and socially beneficial decisions.
  4. That Thomson Reuters will maintain appropriate accountability measures for our AI products and services.
  5. That Thomson Reuters will implement practices intended to make the use of AI in our products and services explainable.

LC: What’s the one thing about AI ethics that legal professionals should know?

Herger: AI solutions can help legal professionals improve productivity and provide better legal services to their clients. However, it’s important to be aware of the limitations and intended use of AI solutions. It’s also important for humans to remain involved to ensure that we’re maintaining ethical use and development of these tools, as well as including true domain expertise.

Check out Legal Geek Takeover session highlights on Legal Current, and see more session insights on Thomson Reuters Institute.
