
Explainable AI for healthcare: enabling a revolution

Just as electricity and telecommunications transformed major industries in the late 19th century, artificial intelligence (AI) has the potential to dramatically change the economy and society in the near future. In fact, AI algorithms have already begun to revolutionize the way many businesses function. However, well-known limitations and drawbacks, including interpretability, explainability, biases, and robustness, still hamper the full realization of AI's potential in many fields. “These obstacles are especially relevant for healthcare,” says Thomas Lukasiewicz, Professor of Computer Science at the University of Oxford. “An incorrect diagnosis of a disease may lead to an incorrect treatment, with life-threatening consequences for the patient.” An expert in AI and machine learning, Prof. Lukasiewicz is the recipient of the AXA Chair in Explainable Artificial Intelligence in Healthcare. The aim of the research program is to develop a new generation of AI technologies, called neural-symbolic AI systems, tailored to the specific requirements of healthcare. The overall objective is to substantially reduce the costs of healthcare, improve its availability, and increase people's life span and well-being. A crucial aspect of the technologies to be developed will be their ability to explain the outputs they generate.
The last few years have been landmark years for AI, especially owing to huge progress in deep learning, an advanced machine learning technology based on neural networks that is achieving revolutionary results in a wide range of practical tasks, such as speech recognition and generation, computer vision, language-related tasks, and self-driving vehicles. Neural networks attempt to imitate how the human brain processes data and creates patterns for decision making. However, their inner workings are so complex that the functional relationship between their inputs and outputs remains opaque. “Neural networks are generally ‘black boxes’, lacking interpretability and explainability,” explains Prof. Lukasiewicz. “We usually do not know which nodes or weights correspond to which meaning, and we cannot trace the source of a computed result in terms of a human-readable explanation.” This lack of transparency risks undermining meaningful scrutiny and accountability, which is a significant concern when these systems are applied as part of decision-making processes that can have a considerable impact on people's lives, as is the case in healthcare. As the chairholder points out, this is also highlighted by the EU General Data Protection Regulation: “Among other things, this regulation enforces the right to explanations for users impacted by algorithmic decisions.” In addition to lacking explainability, “deep-learning technologies are also difficult to verify, they may be biased, and they may have robustness problems.”
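To make the “black box” point concrete, here is a minimal, self-contained illustration (not drawn from the chair's research program): a tiny neural network learns the XOR function, yet its learned weight matrices are just arrays of numbers with no human-readable interpretation.

```python
# Minimal sketch, for illustration only: even a tiny trained network offers
# no human-readable explanation of how it maps inputs to outputs.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

# Randomly initialised weights for a small 2-8-1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)               # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)             # forward pass, output layer
    d_out = (out - y) * out * (1 - out)    # backprop of squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out                      # plain gradient-descent updates
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

print("Predictions:", out.ravel().round(2))    # typically close to [0, 1, 1, 0]
print("Hidden-layer weights:\n", W1.round(2))  # numbers with no symbolic meaning
```

The network ends up computing the right answers, but nothing in the printed weight matrices corresponds to a rule a clinician could inspect or contest.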


Developing technologies that we can trust: a new paradigm for AI

As these limitations have become increasingly apparent, AI experts have expressed the need for a new paradigm, called by some “the third wave”. The three waves of AI refer to consecutive milestones in AI capabilities. According to this division, we are currently experiencing the second wave of AI, dominated by machine learning and especially deep learning technologies. “It is commonly believed, both inside and outside the deep learning community, that certain progress in AI can only be achieved by combining these statistical learning technologies with other AI technologies,” Prof. Lukasiewicz reports. More specifically, the third generation should build on both the first-wave AI systems (rule-based or logic-based systems) and the second-wave systems. The rationale is that the two waves have complementary strengths and weaknesses across the different dimensions of intelligence. To put it simply, the first, based on handcrafted knowledge, is particularly strong at reasoning but has no learning capability and handles uncertainty poorly. The second-wave systems, on the other hand, which are based on statistical learning, “have nuanced classification and prediction capabilities”: they are good at perceiving and learning. “A very natural idea is thus to combine them,” the chairholder summarizes, “and to create a third wave of AI systems, which we also call neural-symbolic AI systems.”

Building on the chairholder's expertise in logic-based, neural, and explainable AI, as well as on the world-leading medical expertise of Oxford's Medical Sciences Division, the research team will develop systems that “have an interpretable encoding of logic-based knowledge and a verifiable semantics. They will allow for question answering and analytics in healthcare based on explainable logic-based reasoning, and abstract logic-based domain knowledge will complement the data extraction process, so that the learning process no longer requires such huge amounts of data. Furthermore, deep learning technologies will be used to allow for easily adaptable and generalizable technologies for extracting structured data from multimodal unstructured sources, and to allow for highly scalable inconsistency- and noise-tolerant reasoning on top of logic-based knowledge.”
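As a rough, purely illustrative sketch of this neural-symbolic combination (the function names, rules, and confidence thresholds below are invented for exposition and do not represent the systems the team will build), a learned extractor might turn unstructured clinical text into weighted facts, over which a small set of hand-written logic rules then reasons, keeping a human-readable trace of which rules fired:

```python
# Hypothetical sketch of a neural-symbolic pipeline: a learned extractor feeds
# structured facts into a rule-based reasoner that produces explanations.

def neural_extractor(clinical_note: str) -> dict:
    """Stand-in for a trained extraction network; returns facts with confidences."""
    facts = {}
    if "fever" in clinical_note.lower():
        facts["fever"] = 0.93
    if "cough" in clinical_note.lower():
        facts["cough"] = 0.88
    return facts

# Hand-written domain rules: (conclusion, required facts, confidence threshold).
RULES = [
    ("suspected_respiratory_infection", {"fever", "cough"}, 0.8),
]

def symbolic_reasoner(facts: dict) -> list:
    """Applies the rules and records why each conclusion was drawn."""
    conclusions = []
    for conclusion, premises, threshold in RULES:
        if all(facts.get(p, 0.0) >= threshold for p in premises):
            explanation = f"{conclusion} because " + ", ".join(
                f"{p} (confidence {facts[p]:.2f})" for p in sorted(premises))
            conclusions.append(explanation)
    return conclusions

facts = neural_extractor("Patient presents with fever and persistent cough.")
for line in symbolic_reasoner(facts):
    print(line)
# -> suspected_respiratory_infection because cough (confidence 0.88), fever (confidence 0.93)
```

The point of the split is that the explanation comes from the symbolic layer, while adaptability to noisy, unstructured input comes from the learned layer, mirroring the division of labour the chairholder describes.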

The potential for healthcare applications of AI is huge. “It ranges from disease prevention, early detection of diseases, better and more affordable diagnosis, and medical decision making in general to designing new pharmaceutical products and optimized treatments,” Prof. Thomas Lukasiewicz specifies. In this sense, the expected results of this research program will contribute to both the AI and the healthcare communities. By moving beyond the limitations of current AI systems, including but not limited to opacity, the results will enable new medical insights and progress. For the insurance industry, these advances will also allow for more accurate health risk prediction and open up possibilities for risk reduction.

Thomas LUKASIEWICZ

Institution

University of Oxford

Country

United Kingdom

Nationality

German
