TY - GEN
T1 - Robots that anticipate pain
T2 - 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017
AU - Sur, Indranil
AU - Ben Amor, Hani
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/12/13
Y1 - 2017/12/13
N2 - To ensure system integrity, robots need to proactively avoid any unwanted physical perturbation that may cause damage to the underlying hardware. In this paper, we investigate a machine learning approach that allows robots to anticipate impending physical perturbations from perceptual cues. In contrast to other approaches that require knowledge about sources of perturbation to be encoded before deployment, our method is based on experiential learning. Robots learn to associate visual cues with subsequent physical perturbations and contacts. In turn, these extracted visual cues are then used to predict potential future perturbations acting on the robot. To this end, we introduce a novel deep network architecture which combines multiple sub-networks for dealing with robot dynamics and perceptual input from the environment. We present a self-supervised approach for training the system that does not require any labeling of training data. Extensive experiments in a human-robot interaction task show that a robot can learn to predict physical contact by a human interaction partner without any prior information or labeling.
UR - http://www.scopus.com/inward/record.url?scp=85041964363&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85041964363&partnerID=8YFLogxK
U2 - 10.1109/IROS.2017.8206442
DO - 10.1109/IROS.2017.8206442
M3 - Conference contribution
AN - SCOPUS:85041964363
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 5541
EP - 5548
BT - IROS 2017 - IEEE/RSJ International Conference on Intelligent Robots and Systems
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 24 September 2017 through 28 September 2017
ER -