Extracting bimanual synergies with reinforcement learning

Kevin Sebastian Luck, Hani Ben Amor

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Motor synergies are an important concept in human motor control. Through the co-activation of multiple muscles, complex motions involving many degrees of freedom can be generated. However, leveraging this concept in robotics typically entails using human data that may be incompatible with the kinematics of the robot. In this paper, our goal is to enable a robot to identify synergies for low-dimensional control using trial-and-error only. We discuss how synergies can be learned through latent space policy search and introduce an extension of the algorithm that reuses previously learned synergies for exploration. Applying the algorithm to a bimanual manipulation task on the Baxter robot shows that performance can be increased by reusing learned synergies intra-task when learning to lift objects; however, reusing synergies between two tasks with different objects did not lead to a significant improvement.
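
The abstract does not spell out the algorithm, but its core idea (exploring in a low-dimensional latent space whose basis vectors, the synergies, map to full joint commands) can be illustrated with a short sketch. The Python sketch below is purely illustrative and rests on assumptions not taken from the paper: a linear synergy matrix W, a Gaussian latent policy over a fixed-length trajectory, a toy tracking reward, and a reward-weighted update. It is not the authors' implementation.

# Minimal, purely illustrative sketch of latent-space policy search with linear
# synergies. All specifics here (variable names, the toy reward, the
# reward-weighted update, the least-squares refit of W) are assumptions for
# illustration only and are not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

D = 14            # full action dimension, e.g. two 7-DoF arms
d = 3             # latent (synergy) dimension
T = 20            # time steps per rollout
n_iters = 200     # policy-search iterations
n_samples = 16    # rollouts per iteration

q_target = rng.normal(size=(T, D))      # toy target joint trajectory standing in for the task

W = rng.normal(scale=0.1, size=(D, d))  # synergy matrix: joint command q_t = W @ z_t
mu = np.zeros((T, d))                   # mean latent trajectory (the latent policy)
sigma = 0.5                             # exploration noise, injected only in the latent space

def rollout_return(W, Z):
    """Toy return: negative tracking error of the reconstructed joint trajectory."""
    Q = Z @ W.T                         # (T, D) joint commands from (T, d) latent commands
    return -np.sum((Q - q_target) ** 2)

for _ in range(n_iters):
    Zs = [mu + sigma * rng.normal(size=(T, d)) for _ in range(n_samples)]
    Rs = np.array([rollout_return(W, Z) for Z in Zs])

    # Reward-weighted averaging of the sampled latent trajectories
    # (a PoWER-style weighting; the paper's actual update may differ).
    w = np.exp((Rs - Rs.max()) / (Rs.std() + 1e-8))
    w /= w.sum()
    mu = sum(wi * Zi for wi, Zi in zip(w, Zs))

    # Refit the synergy matrix to the best rollout by least squares, so the
    # synergies themselves are adapted from trial-and-error data alone.
    Z_best = Zs[int(np.argmax(Rs))]
    X, *_ = np.linalg.lstsq(Z_best, q_target, rcond=None)  # solves Z_best @ X ~= q_target
    W = X.T

The point of the construction is that exploration noise lives in d dimensions rather than D; warm-starting W or mu from an earlier task is where the paper's idea of reusing previously learned synergies for exploration would plug in.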

Original language: English (US)
Title of host publication: IROS 2017 - IEEE/RSJ International Conference on Intelligent Robots and Systems
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4805-4812
Number of pages: 8
Volume: 2017-September
ISBN (Electronic): 9781538626825
DOI: 10.1109/IROS.2017.8206356
State: Published - Dec 13 2017
Event: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017 - Vancouver, Canada
Duration: Sep 24 2017 - Sep 28 2017

Other

Other: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017
Country: Canada
City: Vancouver
Period: 9/24/17 - 9/28/17

Fingerprint

Reinforcement learning
Robots
Muscle
Kinematics
Robotics
Chemical activation

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Computer Vision and Pattern Recognition
  • Computer Science Applications

Cite this

Luck, K. S., & Ben Amor, H. (2017). Extracting bimanual synergies with reinforcement learning. In IROS 2017 - IEEE/RSJ International Conference on Intelligent Robots and Systems (Vol. 2017-September, pp. 4805-4812). [8206356] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IROS.2017.8206356

