TY - GEN
T1 - Co-active learning to adapt humanoid movement for manipulation
AU - Mao, Ren
AU - Baras, John S.
AU - Yang, Yezhou
AU - Fermüller, Cornelia
N1 - Funding Information:
This work was supported by DARPA (through ARO) grant W911NF-14-1-0384 and by NSF through grants CNS-1544787 and SMA-1540917.
Publisher Copyright:
© 2016 IEEE.
PY - 2016/12/30
Y1 - 2016/12/30
N2 - In this paper, we address the problem of interactive robot movement adaptation under various environmental constraints. A common approach is to adopt motion primitives to generate target motions from demonstrations; however, their generalization capability to novel environments is weak. Additionally, traditional motion-generation methods do not consider the varied constraints imposed by different users, tasks, and environments. In this work, we propose a co-active learning framework for adapting the movement of robot end-effectors in manipulation tasks. It is designed to adapt the original imitation trajectories, which are learned from demonstrations, to novel situations with different constraints. The framework also incorporates user feedback on the adapted trajectories and learns to adapt movement through human-in-the-loop interactions. Experiments on a humanoid platform validate the effectiveness of our approach.
AB - In this paper, we address the problem of interactive robot movement adaptation under various environmental constraints. A common approach is to adopt motion primitives to generate target motions from demonstrations; however, their generalization capability to novel environments is weak. Additionally, traditional motion-generation methods do not consider the varied constraints imposed by different users, tasks, and environments. In this work, we propose a co-active learning framework for adapting the movement of robot end-effectors in manipulation tasks. It is designed to adapt the original imitation trajectories, which are learned from demonstrations, to novel situations with different constraints. The framework also incorporates user feedback on the adapted trajectories and learns to adapt movement through human-in-the-loop interactions. Experiments on a humanoid platform validate the effectiveness of our approach.
UR - http://www.scopus.com/inward/record.url?scp=85010203554&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85010203554&partnerID=8YFLogxK
U2 - 10.1109/HUMANOIDS.2016.7803303
DO - 10.1109/HUMANOIDS.2016.7803303
M3 - Conference contribution
AN - SCOPUS:85010203554
T3 - IEEE-RAS International Conference on Humanoid Robots
SP - 372
EP - 378
BT - Humanoids 2016 - IEEE-RAS International Conference on Humanoid Robots
PB - IEEE Computer Society
T2 - 16th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2016
Y2 - 15 November 2016 through 17 November 2016
ER -