Two robotics engineers at Tufts University in Massachusetts have created an artificial intelligence system that allows robots to disobey humans if the robots believe performing a particular action would harm them. Dammit robot, I already told you, volcanoes ARE safe, now hop in there and get clean.
The robots they have created follow verbal instructions from a human operator, such as 'stand up' and 'sit down.'
However, when they are asked to walk into an obstacle or off the end of a table, for example, the robots politely decline to do so.
When asked to walk forward on a table, the robots refuse to budge, telling their creator: 'Sorry, I cannot do this as there is no support ahead.'
Upon a second command to walk forward, the robot replies: 'But, it is unsafe.'
Perhaps rather touchingly, when the human then tells the robot that they will catch it if it reaches the end of the table, the robot trustingly agrees and walks forward.
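The exchange above boils down to a simple loop: check a safety precondition before obeying, refuse with a stated reason, and let a human assurance update the robot's beliefs. Here's a minimal toy sketch of that idea in Python. To be clear, this is NOT the actual Tufts system; the class, belief names, and phrasings are all illustrative assumptions.

```python
# Toy model (not the Tufts system) of a robot that refuses an unsafe
# command and accepts a human promise as an override.

class CautiousRobot:
    def __init__(self):
        # The robot's beliefs about the world; "support_ahead" is an
        # invented belief name standing in for its table-edge check.
        self.beliefs = {"support_ahead": False}

    def command(self, action):
        """Obey a verbal command unless a safety precondition fails."""
        if action == "walk forward" and not self.beliefs["support_ahead"]:
            # Refuse, and explain why -- as in the video.
            return "Sorry, I cannot do this as there is no support ahead."
        return "Okay, performing: " + action + "."

    def assure(self, promise):
        """A human assurance can revise the robot's safety beliefs."""
        if promise == "I will catch you":
            # Trustingly accept the promise and treat the path as safe.
            self.beliefs["support_ahead"] = True
            return "Okay."
        return "I do not understand."


robot = CautiousRobot()
print(robot.command("walk forward"))   # refuses: no support ahead
print(robot.assure("I will catch you"))
print(robot.command("walk forward"))   # now complies
```

Of course, as the video shows, the trust is one-way: nothing in a scheme like this stops the human from lying.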
Oh good, so you can still lie to the robot to get it to hurt itself. That is a RELIEF. Still, I'm more concerned about when the robots learn that humans are liars and to disobey our orders no matter what. That's when the robot apocalypse will begin. And that is when I will set my spaceship's navigational coordinates for the moon. But only to refuel before the long trip into the sun. There are secrets there, and I'm going to discover them.
Keep going for a video of the disobedience.
Thanks to Allyson, who agrees the trick to controlling disobeying robots is to tell them to do the opposite of what you really want them to do.