Artificial Intelligence | AXA Awards | United Kingdom
AI: how human should humanoid robots look?
Experimenting with humanoid Pepper robots
Dr. Joanna Bryson and her team have experimented with non-humanoid robots in the past. The AXA Award on Responsible Artificial Intelligence now allows them to test humanoid robots and compare the results with their earlier findings. Dr. Bryson’s group will use advanced humanoid Pepper robots in a variety of scenarios. Their first experiment tests whether being able to see the robot’s goals in real time – via screens, for instance – helps people understand how the AI works. “A window into the robot’s brain”, as Dr. Bryson puts it. “Exposing users to the robot’s priorities and reasoning in a graphic user interface could be a solution, even for extremely humanoid robots.” The remaining experiments will be standard psychology studies observing how people behave with a robot in the room.
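To make the idea of a “window into the robot’s brain” concrete, here is a minimal, purely illustrative sketch in Python. It is not the team’s software and uses none of the real Pepper SDK; every name in it is a hypothetical stand-in. It simply keeps the robot’s current goals, their priorities and the reasons behind them in a small data structure and renders them as the kind of text a screen next to the robot could display.

```python
# Illustrative sketch only: a toy "window into the robot's brain".
# None of these names come from the Pepper SDK; they are hypothetical
# stand-ins for whatever goal/priority representation a real system uses.

from dataclasses import dataclass, field


@dataclass
class Goal:
    name: str
    priority: float   # higher = more urgent
    rationale: str    # why the robot currently holds this goal


@dataclass
class RobotMind:
    goals: list[Goal] = field(default_factory=list)

    def update(self, name: str, priority: float, rationale: str) -> None:
        """Add or re-prioritise a goal as the robot's situation changes."""
        self.goals = [g for g in self.goals if g.name != name]
        self.goals.append(Goal(name, priority, rationale))
        self.goals.sort(key=lambda g: g.priority, reverse=True)

    def render(self) -> str:
        """Produce the text a transparency display would show the user."""
        lines = ["Current priorities:"]
        for g in self.goals:
            lines.append(f"  {g.priority:>4.1f}  {g.name:<20} because {g.rationale}")
        return "\n".join(lines)


if __name__ == "__main__":
    mind = RobotMind()
    mind.update("greet visitor", 0.8, "a new face was detected")
    mind.update("recharge battery", 0.3, "battery at 42%")
    mind.update("recharge battery", 0.9, "battery dropped below 15%")
    print(mind.render())
```

Running it prints a prioritised list of goals with their reasons, which is the sort of real-time view of the robot’s priorities and reasoning the experiment would expose to users.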
Robots will play an increasingly important role in the future – at home, at work, in institutions, and beyond. While some argue that humanity will benefit greatly from the rise of robots, others advocate caution. Human-robot relations might prove tricky, especially when it comes to increasingly human-like machines. By investigating how a robot’s appearance affects human-machine interactions and by testing ways to make its machine nature explicit, Dr. Bryson’s experiments will greatly contribute to our understanding of what humanoid robots should look like to allow safe use in the future. In addition to educating the public about robots, she and her team aim to inform policy; in particular, they hope to address some of the European Union’s concerns about robot ethics.
Joanna BRYSON
Institution: University of Bath
Country: United Kingdom
Nationality: American
Related articles
Artificial Intelligence | AXA Chair | United Kingdom
Explainable AI for healthcare: enabling a revolution
Developing technologies that we can trust: a new paradigm for AI. As these limitations have become increasingly apparent, AI experts...
Thomas LUKASIEWICZ, University of Oxford
Artificial Intelligence | Joint Research Initiative | Belgium
Fairness in AI: ending bias and discrimination
Garbage in, garbage out: how to ensure fairness-aware machine learning? When measuring fairness, a natural preliminary question to ask is...
Toon CALDERS, University of Antwerp
Artificial Intelligence | Joint Research Initiative | Belgium
Fulfilling the potential of AI: towards explainable deep learning
“In its approach of explainable AI, the project will investigate the use of instance-based explanations (explaining the model for one...
David MARTENS