
Joint Research Initiative


Fulfilling the potential of AI: towards explainable deep learning

The latest advances in artificial intelligence have proven tremendously successful at a variety of complex tasks. Deep learning, a subset of machine learning, in particular offers the promise of transformative breakthroughs for a wide range of industries, from automated cars to medical research and risk management. However, the opacity of these sophisticated algorithms, often described as "black boxes", is hindering their application in the real world. Given the increasingly important role machine learning plays in the insurance industry, Professor David Martens, an expert in AI at the University of Antwerp, has undertaken an ambitious three-year collaborative project with practitioners from AXA Belgium aimed at explaining the decisions made by such systems. Specifically, the aim of the Joint Research Initiative, called Explainable Artificial Intelligence (ExAI), is to develop new algorithms that explain complex AI models, both in terms of global insight and for individual, instance-based decisions. The results will be validated in several real-world applications, notably within the framework of AXA.
To achieve artificial intelligence, developers are building increasingly complex algorithms. In the case of deep learning, these algorithms are inspired by the structure of the human brain, with millions of neurons connected across many layers. This structure, called an "artificial neural network", has the capacity to learn from raw data without human intervention. In fact, the inner workings and self-learning capacities of such networks are so advanced that they can escape the control and understanding of their own operators and programmers. “Although the idea of an artificial neural network has been around for decades, only recently did 'deep' networks emerge as successful techniques,” explains Prof. David Martens. “They now achieve even superhuman results: for some tasks they are more accurate at recognizing objects in a picture than a human.” The fulfillment of this extraordinary potential, however, is conditional on finding ways to explain why these machines make the decisions they do. In the insurance industry, for instance, where such systems have been shown to achieve better predictive performance than other state-of-the-art techniques, these explanations will be needed not only by customers, but also by customer-facing employees, managers, members of the data science team and, last but not least, regulators.
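The layered structure described above can be sketched in a few lines. The following is a minimal, illustrative feedforward network in NumPy; the layer sizes, random weights, and input are arbitrary placeholders for demonstration, not part of the project's models:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied by each hidden neuron
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Pass an input vector through successive layers of neurons."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)                 # hidden layer: linear map + nonlinearity
    z = weights[-1] @ a + biases[-1]        # output layer (raw scores)
    return np.exp(z) / np.exp(z).sum()      # softmax: scores -> class probabilities

# A tiny network: 4 inputs -> two hidden layers of 8 neurons -> 2 outputs.
sizes = [4, 8, 8, 2]
weights = [rng.normal(0.0, 0.5, (n_out, n_in)) for n_in, n_out in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

probs = forward(rng.normal(size=4), weights, biases)  # two probabilities summing to 1
```

Each layer is only a matrix multiplication followed by a nonlinearity, but stacking many such layers (and learning millions of weights from data rather than sampling them randomly) is what makes the resulting model both powerful and hard to interpret.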




“In its approach to explainable AI, the project will investigate the use of instance-based explanations (explaining the model's output for a single prediction),” specifies Prof. Martens. These will be obtained using the counterfactual concept: which data, had it not been present, would have led to a different decision? “Instance-based explanations provide a data-driven explanation for a single prediction,” Martens explains. “For example: what words or paragraphs said in a phone call to a help desk made the AI predict that the customer is not satisfied with his or her service?” For this example, the approach consists in identifying which words, had they not been said in the call, would have led to a satisfied (or neutral) prediction. He will also look at how such explanations can be used to obtain general insights into the domain.
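A minimal sketch of this counterfactual idea: greedily remove the words whose absence most lowers the "dissatisfied" score until the prediction flips; the removed words form the explanation. Here a toy scoring function stands in for the real deep model, and the function names, cue words, and threshold are all hypothetical:

```python
def dissatisfied_score(words):
    """Toy stand-in for the trained model: fraction of negative cue words.
    A real system would return the deep model's predicted probability."""
    negative_cues = {"angry", "refund", "terrible", "cancel"}
    return sum(w.lower() in negative_cues for w in words) / max(len(words), 1)

def counterfactual_words(words, predict, threshold=0.1):
    """Repeatedly remove the single word whose absence most lowers the
    score, until the model no longer predicts 'dissatisfied'. The removed
    words are the counterfactual explanation for the original prediction."""
    current = list(words)
    removed = []
    while current and predict(current) >= threshold:
        best = min(range(len(current)),
                   key=lambda i: predict(current[:i] + current[i + 1:]))
        removed.append(current.pop(best))
    return removed

call = "I am angry and I want a refund because the service was terrible".split()
explanation = counterfactual_words(call, dissatisfied_score)
print(explanation)  # -> ['angry', 'refund']
```

The explanation is exactly the minimal-style answer to the counterfactual question above: had "angry" and "refund" not been said, the toy model would no longer have predicted dissatisfaction. Against a real black-box model, the same greedy loop applies with the model's probability in place of `dissatisfied_score`.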

“To achieve ethical data science, transparency is at the basis of everything. Fairness and accountability depend on it. If you want to make sure your AI system doesn’t discriminate, it is crucial to be able to explain its decisions. I’m very excited to work on this issue with AXA. Not only does this collaboration provide us with large amounts of real-world data, it also ensures that the research and technical work we do will have a concrete impact,” insists Prof. Martens. Indeed, once the ExAI algorithms are developed, they will be validated through use cases within AXA's home insurance business: the first, a model to predict the likely cost of household damage; the second, as in the previous example, a model to detect the satisfaction level of customers based on calls, mail and letter data, complaint forms, and so on.

AI is emerging as the defining technology of our age, opening social and economic perspectives we never thought possible. However, for the benefits to be truly and safely accessible, key challenges remain, one of which is transparency. The present JRI offers the rare opportunity of combining research and practitioner expertise in order to make both academic and practical advances in that area. Focusing on insurance business applications, the output of the project will greatly contribute to the safe and responsible usage of the most advanced AI technologies for real-world applications in the future.