Paving the way for the development of responsible AI
University of Southampton
"The last few years have seen a significant rise in the development of machine learning approaches, with great success in specific areas such as game playing and time-series prediction for traffic monitoring, epidemiology, and disaster response applications," the researcher reports. "In fact, the pace of change in the field over the last decade has been too fast for those who use, operate, and regulate the systems that end up relying on such AI-based solutions." To illustrate this gap between theory and application, Dr. Ramchurn cites the example of autonomous UAVs (unmanned aerial vehicles), one of the key areas his investigation will focus on. "The Civil Aviation Authority in the UK has struggled to design the rules for flights involving purely autonomous UAVs, let alone fleets of UAVs! Such systems are therefore liable to major failures that may negatively impact the organisations running them and, more importantly, their end-users." "Take Uber, for instance. When they designed the system, they failed to account for the impact on the drivers. If drivers want to make any decent money, they need to work long hours, without any job security. Technology can completely change an industry and impact lives negatively," he stresses. "Improving systems after the fact is not enough. These issues need to be thought through beforehand." This is what the research programme is about: building a methodology to ensure future algorithms are more responsible.
Enabling AI and humans to work hand in hand
His programme will focus on two key application areas: the use of drones for disaster response and the use of IoT systems for energy conservation in smart homes. "Building on a number of existing results we already have on these two kinds of AI systems, we are going to try to develop methods that allow these AI systems to make sense of what is going on around them and to take decisions we can trust," Dr. Ramchurn explains. Among the questions the project investigates are the design of algorithms that account for the risks they expose others to; design principles ensuring that interactions are understandable by end-users, as well as fair and sustainable; the preservation of privacy; and the development of "responsibility" within the reasoning of machine learning systems. To answer these questions, the research programme collaborates with experts from a wide range of disciplines: social psychologists, ethnographers, human-computer interaction specialists, and legal experts.
Scientists are constantly trying to find new ways to bridge the gap between humans and machines. In the past, this effort led to the invention of keyboards, mice, and touch screens. Now, with AI, interactions have become far more complex. "New issues have arisen, like humans and machines acting as equal members of a team, for instance." With new AI technologies about to emerge, it is urgent that we find answers on how to ensure harmonious human/computer interaction. By aiming to develop a methodology for responsible AI, Dr. Ramchurn's research addresses questions that are about to become of utmost importance. In this sense, the project aligns closely with existing AXA-funded projects, including those of Prof. Christophe Marsala, Prof. Maurizio Filippone, and Dr. Joanna Bryson.