Latent space policy search for robotics

Kevin Sebastian Luck, Gerhard Neumann, Erik Berger, Jan Peters, Hani Ben Amor

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Citations (Scopus)

Abstract

Learning motor skills for robots is a hard task. In particular, a high number of degrees-of-freedom in the robot can pose serious challenges to existing reinforcement learning methods, since it leads to a high-dimensional search space. However, complex robots are often intrinsically redundant systems and, therefore, can be controlled using a latent manifold of much smaller dimensionality. In this paper, we present a novel policy search method that performs efficient reinforcement learning by uncovering the low-dimensional latent space of actuator redundancies. In contrast to previous attempts at combining reinforcement learning and dimensionality reduction, our approach does not perform dimensionality reduction as a preprocessing step but naturally combines it with policy search. Our evaluations show that the new approach outperforms existing algorithms for learning motor skills with high-dimensional robots.
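The abstract's core idea, performing dimensionality reduction inside the policy update rather than as a preprocessing step, can be illustrated with a toy sketch. This is not the paper's actual algorithm: the quadratic reward, the dimensions, the PoWER-style exponentiated weighting, and the weighted-PCA re-estimation of the exploration directions `W` are all assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

D, d = 20, 2                  # full parameter dimension vs. assumed latent dimension
n_samples, n_iters = 100, 40  # episodes per update, number of updates
sigma2 = 0.01                 # residual isotropic exploration variance

# Toy redundant task: the reward depends only on a hidden 2-D projection
# of the 20-D parameter vector, so most parameter directions are redundant.
P = rng.normal(size=(d, D))
target = np.array([1.0, -1.0])

def reward(theta):
    return -np.sum((P @ theta - target) ** 2)

theta_mean = np.zeros(D)
W = 0.3 * rng.normal(size=(D, d))  # latent-to-parameter exploration directions

mean_rewards = []
for _ in range(n_iters):
    # Roll out episodes with low-rank (PPCA-style) exploration noise:
    # theta = mean + W z + eps, z ~ N(0, I_d), eps ~ N(0, sigma2 * I_D).
    Z = rng.normal(size=(n_samples, d))
    Eps = np.sqrt(sigma2) * rng.normal(size=(n_samples, D))
    Theta = theta_mean + Z @ W.T + Eps
    R = np.array([reward(t) for t in Theta])
    mean_rewards.append(R.mean())

    # Exponentiated reward weights, as in episodic reward-weighted ML.
    w = np.exp((R - R.max()) / (R.std() + 1e-8))
    w /= w.sum()

    # Reward-weighted mean update, then re-estimate the latent directions
    # as the top-d weighted principal components of the sampled parameters:
    # the dimensionality reduction happens inside the policy update,
    # not as a separate preprocessing step.
    theta_mean = w @ Theta
    centered = Theta - theta_mean
    C = centered.T @ (centered * w[:, None])
    vals, vecs = np.linalg.eigh(C)
    W = vecs[:, -d:] * np.sqrt(np.maximum(vals[-d:] - sigma2, 1e-6))

print(f"mean reward: {mean_rewards[0]:.2f} -> {mean_rewards[-1]:.2f}")
```

Because exploration is restricted to the d directions in `W` plus small isotropic noise, the search effectively happens in a low-dimensional latent space even though the parameters live in 20 dimensions; with real rollouts the weighted PCA step is what would uncover the actuator redundancies.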

Original language: English (US)
Title of host publication: IROS 2014 Conference Digest - IEEE/RSJ International Conference on Intelligent Robots and Systems
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1434-1440
Number of pages: 7
ISBN (Print): 9781479969340
DOI: 10.1109/IROS.2014.6942745
State: Published - Oct 31 2014
Externally published: Yes
Event: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2014 - Chicago, United States
Duration: Sep 14 2014 - Sep 18 2014

Other

Other: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2014
Country: United States
City: Chicago
Period: 9/14/14 - 9/18/14

Fingerprint

  • Reinforcement learning
  • Robotics
  • Robots
  • Redundancy
  • Actuators

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Computer Vision and Pattern Recognition
  • Computer Science Applications

Cite this

Luck, K. S., Neumann, G., Berger, E., Peters, J., & Ben Amor, H. (2014). Latent space policy search for robotics. In IROS 2014 Conference Digest - IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1434-1440). [6942745] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IROS.2014.6942745

@inproceedings{1c08605fb8214be0a0ffd7d567270a01,
title = "Latent space policy search for robotics",
abstract = "Learning motor skills for robots is a hard task. In particular, a high number of degrees-of-freedom in the robot can pose serious challenges to existing reinforcement learning methods, since it leads to a high-dimensional search space. However, complex robots are often intrinsically redundant systems and, therefore, can be controlled using a latent manifold of much smaller dimensionality. In this paper, we present a novel policy search method that performs efficient reinforcement learning by uncovering the low-dimensional latent space of actuator redundancies. In contrast to previous attempts at combining reinforcement learning and dimensionality reduction, our approach does not perform dimensionality reduction as a preprocessing step but naturally combines it with policy search. Our evaluations show that the new approach outperforms existing algorithms for learning motor skills with high-dimensional robots.",
author = "Luck, {Kevin Sebastian} and Gerhard Neumann and Erik Berger and Jan Peters and {Ben Amor}, Hani",
year = "2014",
month = "10",
day = "31",
doi = "10.1109/IROS.2014.6942745",
language = "English (US)",
isbn = "9781479969340",
pages = "1434--1440",
booktitle = "IROS 2014 Conference Digest - IEEE/RSJ International Conference on Intelligent Robots and Systems",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

TY  - GEN
T1  - Latent space policy search for robotics
AU  - Luck, Kevin Sebastian
AU  - Neumann, Gerhard
AU  - Berger, Erik
AU  - Peters, Jan
AU  - Ben Amor, Hani
PY  - 2014/10/31
Y1  - 2014/10/31
N2  - Learning motor skills for robots is a hard task. In particular, a high number of degrees-of-freedom in the robot can pose serious challenges to existing reinforcement learning methods, since it leads to a high-dimensional search space. However, complex robots are often intrinsically redundant systems and, therefore, can be controlled using a latent manifold of much smaller dimensionality. In this paper, we present a novel policy search method that performs efficient reinforcement learning by uncovering the low-dimensional latent space of actuator redundancies. In contrast to previous attempts at combining reinforcement learning and dimensionality reduction, our approach does not perform dimensionality reduction as a preprocessing step but naturally combines it with policy search. Our evaluations show that the new approach outperforms existing algorithms for learning motor skills with high-dimensional robots.
AB  - Learning motor skills for robots is a hard task. In particular, a high number of degrees-of-freedom in the robot can pose serious challenges to existing reinforcement learning methods, since it leads to a high-dimensional search space. However, complex robots are often intrinsically redundant systems and, therefore, can be controlled using a latent manifold of much smaller dimensionality. In this paper, we present a novel policy search method that performs efficient reinforcement learning by uncovering the low-dimensional latent space of actuator redundancies. In contrast to previous attempts at combining reinforcement learning and dimensionality reduction, our approach does not perform dimensionality reduction as a preprocessing step but naturally combines it with policy search. Our evaluations show that the new approach outperforms existing algorithms for learning motor skills with high-dimensional robots.
UR  - http://www.scopus.com/inward/record.url?scp=84911500552&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=84911500552&partnerID=8YFLogxK
U2  - 10.1109/IROS.2014.6942745
DO  - 10.1109/IROS.2014.6942745
M3  - Conference contribution
AN  - SCOPUS:84911500552
SN  - 9781479969340
SP  - 1434
EP  - 1440
BT  - IROS 2014 Conference Digest - IEEE/RSJ International Conference on Intelligent Robots and Systems
PB  - Institute of Electrical and Electronics Engineers Inc.
ER  -