Achieving interpretability for big data and machine-learning systems

If machine-learning systems are to be entrusted with critical decisions concerning human lives, humans need to be able to understand how those decisions are reached. In a medical setting, for instance, a model that diagnoses patients' diseases must be able to explain the reasons for its diagnosis. This issue has recently received attention from the European Parliament, whose General Data Protection Regulation opens the way for each citizen to receive an explanation for algorithmic decisions. But rendering the inner structure of complex systems readable is a challenging task. In a Joint Research Initiative, Prof. Christophe Marsala from the Computer Lab at Sorbonne Université and the Data Innovation Lab at AXA aim to provide a better understanding of the different facets of interpretability in a data-science context, focusing on algorithms meant for classification tasks. The objective is to lay the groundwork for conceiving and building a new generation of big data and machine-learning systems with interpretability as a built-in, human-friendly feature.
"The recent controversy in France concerning the opacity of the APB process – an algorithm that assigns students to universities –, is a good example of the need for interpretability when it comes to machine-learning systems", offers Prof. Christophe Marsala. "French students are asking to know how the algorithm sorts through the candidates when the university course they are asking for is saturated. Students are complaining because they don't understand the decision." "In the current context of big data and data science challenges, it appears that, if it is essential to build reliable and efficient systems, it is also crucial to offer interpretable systems and interpretable decisions", the researcher points out.

From machine-learning black boxes to interpretable insights
The motivation for this project comes from the observation that defining interpretability is, in itself, a difficult challenge. "Interpretability is concerned with the inner structure of the studied system, which can for instance be a mathematical function, a set of rules or a decision tree, to name a few. It refers to its validity, its readability, its intuitive coherence or its outputs", Prof. Marsala explains. "Moreover, interpretability is also a highly subjective concept, as it depends on whom it is intended for. The interpretation of a model is directly related to its recipient, their expectations and their knowledge." Whether you are an end-user, as is the case with the French students and the APB process, a domain expert or a data scientist, your conception of what an interpretable system should look like is completely different.
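To make the idea of a readable inner structure concrete, a shallow decision tree is a classical example of a model whose learned logic can be printed directly as rules. The sketch below is a generic illustration, not code from the project; the medical dataset and the tree depth are arbitrary assumptions chosen to echo the diagnosis example above.

    # A minimal sketch of an intrinsically interpretable classifier:
    # a shallow decision tree whose structure prints as if/else rules.
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()  # illustrative medical dataset
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(data.data, data.target)

    # export_text renders the learned tree as nested threshold rules,
    # one concrete notion of a "readable" inner structure.
    print(export_text(tree, feature_names=list(data.feature_names)))

A domain expert can check such rules against their own knowledge, which is exactly the kind of reading that the weights of a more opaque model do not offer.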
The AXA Joint Research Initiative intends to take a global view of interpretability by taking these different perspectives into account. More specifically, the project proposes to distinguish between four types of interpretability, reflecting the complexity of the inner structure of such systems and the various expectations placed on them. To construct all four interpretability definitions, a cross-disciplinary approach will be applied, in particular combining computer science with cognitive science. The next step will be to explore methods that intrinsically offer interpretability properties, to be either added to existing systems or incorporated into future ones.


The project directly addresses current challenges posed by digital transformation, especially when it comes to big data technologies and embedding machine learning in operational environments. This aspect is of particular importance to AXA and to the insurance sector more generally. Interpretability will be crucial to secure customer trust, for instance, or to provide regulatory bodies with proof that legal requirements are respected. "The proposed approaches will make it possible to move from machine-learning black boxes to interpretable insights, opening up the possibility to understand and even change decision-makers’ actions", summarises Prof. Marsala. The findings should provide solid foundations for creating new algorithms, or adapting current ones, to obtain human-friendly versions.
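As one hypothetical illustration of the post-hoc direction (the project's own techniques are not detailed here), permutation importance is a standard, model-agnostic way to extract insight from a trained black box: shuffle each input feature in turn and measure how much the model's accuracy degrades. The black-box model, dataset and parameters below are illustrative assumptions.

    # A hedged sketch of one generic black-box-to-insight technique:
    # model-agnostic permutation importance on a random-forest classifier.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()  # illustrative dataset, as above
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    black_box = RandomForestClassifier(n_estimators=200, random_state=0)
    black_box.fit(X_train, y_train)

    # Permuting a feature breaks its link to the label; the larger the
    # resulting drop in test accuracy, the more the model relies on it.
    result = permutation_importance(black_box, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")

Such rankings do not expose a model's full logic, but they give end-users and regulators a first, verifiable answer to the question of which factors drove a decision.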

Christophe MARSALA

Institution: Sorbonne University
Country: France
Nationality: French
