Artificial Intelligence
AXA Awards
United States
Exploring the true added value of artificial intelligence in the legal system
A cautionary analysis, with an optimistic twist
The first step of the project will be to identify the values embedded in legal procedure, Dr. Calo explains. The second involves assessing which of these values should be reproduced in the context of AI. Another aspect Dr. Calo and his collaborators, Danielle Keats Citron (University of Maryland) and Andrea Simoncini (University of Florence), are working on is the actual added value of these AI tools. "With Danielle and Andrea, we are asking very simple questions: What is it that courts are trying to do? What are these machines good at, what is it that they do better than us?" They argue that machines should perhaps not exercise certain types of judgment, but should instead stick to tasks where we know they will not deny people benefits and can improve the efficiency of the courts and access to justice.

"Our approach is cautionary, of course, but also helpful, because we're trying to show that these technologies offer great opportunities", Dr. Calo insists. "We ought to be looking at this powerful set of tools as an invitation to accomplish our goals better, and not as a replacement for everything humans used to do. Simple things like real-time translation for people who don't speak the language used in court would be very useful, for example. More elaborate systems, such as risk-assessment algorithms, come from a good place, but they come with a host of problems, not the least of which is transparency". Indeed, many of these algorithms are developed by private businesses and are effectively "black boxed": only their owners and developers really know how the software makes decisions.
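To make the transparency point concrete, the sketch below contrasts an interpretable risk score, whose weights can be read, audited, and contested, with a proprietary "black-boxed" tool that exposes only a final number. It is a minimal, hypothetical illustration: the feature names, data, and the vendor call in the closing comment are invented for this example and do not describe any actual court system.

```python
# Hypothetical sketch of the transparency gap described above.
# All feature names, data, and weights are synthetic and for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: three made-up features (prior_appearances, days_since_filing,
# has_counsel) and a binary outcome (e.g., appeared in court as scheduled).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# An interpretable model: every coefficient is visible and open to challenge.
model = LogisticRegression().fit(X, y)
for name, coef in zip(["prior_appearances", "days_since_filing", "has_counsel"],
                      model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

# A black-boxed vendor tool, by contrast, would expose only a score:
#   score = vendor_api.assess(defendant_record)   # hypothetical call
# with no access to the features, weights, or training data behind it,
# which is precisely the auditability problem the project highlights.
```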
This much is sure: asking whether judges or lawyers can be replaced by machines is not the right question. In putting together this project, Dr. Calo and his collaborators are taking a proactive yet realistic approach. In addition to one or more frameworks for the design of procedurally sufficient AI-aided decision-making, the project aims to generate proofs of concept, i.e., one or more models of actual systems co-designed by legal and technical experts. The ultimate objective will be to disseminate this output among policymakers and stakeholders, including academics, judges and other government officials, and industry.
Ryan
CALO
Institution
University of Washington
Country
United States
Nationality
American