Tuesday, March 8

We may all happily follow our robot overlords to disaster

Georgia Tech researchers built the 'Rescue Robot' to determine whether building occupants would trust a robot designed to help them evacuate a high-rise during a fire or other emergency.

Studying people’s trust in robots is an academic field in its own right, and one that’s growing in relevance as we embrace a future of driverless cars and ever-more-powerful artificial intelligence. If we based our expectations on science fiction, we might assume that people harbor a profound mistrust of robots. Instead, research from the Georgia Institute of Technology suggests the real problem may be that we trust robots far too much.

The researchers will present their study next week at an international conference on human-robot interaction, so the full paper hasn’t yet been published. However, an early press release and a preliminary paper give some details of the study, which set out to determine whether high-rise occupants would trust a robot’s instructions in an evacuation scenario. The researchers wanted to know which robot behaviors would win or lose people's trust.

The 26 participants in the experiment had no idea what it was about; they were simply asked to follow a robot that had the words “Emergency Guide Robot” printed prominently on its side. The robot's first task was to lead them to a room where they would read an article and take part in a survey (all a distraction from the real task).

