From the lab to the desert

Fast prototyping and learning of robot locomotion

Kevin Sebastian Luck, Joseph Campbell, Michael Andrew Jansen, Daniel Aukes, Hani Ben Amor

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

We present a methodology for fast prototyping of morphologies and controllers for robot locomotion. Going beyond simulation-based approaches, we argue that the form and function of a robot, as well as their interplay with real-world environmental conditions, are critical. Hence, fast design and learning cycles are necessary to adapt robot shape and behavior to their environment. To this end, we present a combination of laminate robot manufacturing and sample-efficient reinforcement learning. We leverage this methodology to conduct an extensive robot learning experiment. Inspired by locomotion in sea turtles, we design a low-cost crawling robot with variable, interchangeable fins. Learning is performed using both bio-inspired and original fin designs in an artificial indoor environment as well as a natural environment in the Arizona desert. The findings of this study show that static policies developed in the laboratory do not translate to effective locomotion strategies in natural environments. In contrast, sample-efficient reinforcement learning can help to rapidly accommodate changes in the environment or the robot.

Original language: English (US)
Title of host publication: Robotics
Subtitle of host publication: Science and Systems XIII, RSS 2017
Publisher: MIT Press Journals
Volume: 13
ISBN (Electronic): 9780992374730
State: Published - Jan 1 2017
Event: 2017 Robotics: Science and Systems, RSS 2017 - Cambridge, United States
Duration: Jul 12 2017 - Jul 16 2017

Other

Other: 2017 Robotics: Science and Systems, RSS 2017
Country: United States
City: Cambridge
Period: 7/12/17 - 7/16/17

Fingerprint

Robots
Reinforcement learning
Robot learning
Laminates
Controllers
Costs
Experiments

ASJC Scopus subject areas

  • Artificial Intelligence
  • Control and Systems Engineering
  • Electrical and Electronic Engineering

Cite this

Luck, K. S., Campbell, J., Jansen, M. A., Aukes, D., & Ben Amor, H. (2017). From the lab to the desert: Fast prototyping and learning of robot locomotion. In Robotics: Science and Systems XIII, RSS 2017 (Vol. 13). MIT Press Journals.

From the lab to the desert: Fast prototyping and learning of robot locomotion. / Luck, Kevin Sebastian; Campbell, Joseph; Jansen, Michael Andrew; Aukes, Daniel; Ben Amor, Hani.

Robotics: Science and Systems XIII, RSS 2017. Vol. 13 MIT Press Journals, 2017.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Luck, KS, Campbell, J, Jansen, MA, Aukes, D & Ben Amor, H 2017, From the lab to the desert: Fast prototyping and learning of robot locomotion. in Robotics: Science and Systems XIII, RSS 2017. vol. 13, MIT Press Journals, 2017 Robotics: Science and Systems, RSS 2017, Cambridge, United States, 7/12/17.
Luck KS, Campbell J, Jansen MA, Aukes D, Ben Amor H. From the lab to the desert: Fast prototyping and learning of robot locomotion. In Robotics: Science and Systems XIII, RSS 2017. Vol. 13. MIT Press Journals. 2017
Luck, Kevin Sebastian; Campbell, Joseph; Jansen, Michael Andrew; Aukes, Daniel; Ben Amor, Hani. / From the lab to the desert: Fast prototyping and learning of robot locomotion. Robotics: Science and Systems XIII, RSS 2017. Vol. 13 MIT Press Journals, 2017.
@inproceedings{61cddb15b3894597a8f3ae93f12e612d,
title = "From the lab to the desert: Fast prototyping and learning of robot locomotion",
abstract = "We present a methodology for fast prototyping of morphologies and controllers for robot locomotion. Going beyond simulation-based approaches, we argue that the form and function of a robot, as well as their interplay with real-world environmental conditions, are critical. Hence, fast design and learning cycles are necessary to adapt robot shape and behavior to their environment. To this end, we present a combination of laminate robot manufacturing and sample-efficient reinforcement learning. We leverage this methodology to conduct an extensive robot learning experiment. Inspired by locomotion in sea turtles, we design a low-cost crawling robot with variable, interchangeable fins. Learning is performed using both bio-inspired and original fin designs in an artificial indoor environment as well as a natural environment in the Arizona desert. The findings of this study show that static policies developed in the laboratory do not translate to effective locomotion strategies in natural environments. In contrast, sample-efficient reinforcement learning can help to rapidly accommodate changes in the environment or the robot.",
author = "Luck, {Kevin Sebastian} and Joseph Campbell and Jansen, {Michael Andrew} and Daniel Aukes and {Ben Amor}, Hani",
year = "2017",
month = jan,
day = "1",
language = "English (US)",
volume = "13",
booktitle = "Robotics",
publisher = "MIT Press Journals",
}

TY - GEN

T1 - From the lab to the desert

T2 - Fast prototyping and learning of robot locomotion

AU - Luck, Kevin Sebastian

AU - Campbell, Joseph

AU - Jansen, Michael Andrew

AU - Aukes, Daniel

AU - Ben Amor, Hani

PY - 2017/1/1

Y1 - 2017/1/1

N2 - We present a methodology for fast prototyping of morphologies and controllers for robot locomotion. Going beyond simulation-based approaches, we argue that the form and function of a robot, as well as their interplay with real-world environmental conditions, are critical. Hence, fast design and learning cycles are necessary to adapt robot shape and behavior to their environment. To this end, we present a combination of laminate robot manufacturing and sample-efficient reinforcement learning. We leverage this methodology to conduct an extensive robot learning experiment. Inspired by locomotion in sea turtles, we design a low-cost crawling robot with variable, interchangeable fins. Learning is performed using both bio-inspired and original fin designs in an artificial indoor environment as well as a natural environment in the Arizona desert. The findings of this study show that static policies developed in the laboratory do not translate to effective locomotion strategies in natural environments. In contrast, sample-efficient reinforcement learning can help to rapidly accommodate changes in the environment or the robot.

AB - We present a methodology for fast prototyping of morphologies and controllers for robot locomotion. Going beyond simulation-based approaches, we argue that the form and function of a robot, as well as their interplay with real-world environmental conditions, are critical. Hence, fast design and learning cycles are necessary to adapt robot shape and behavior to their environment. To this end, we present a combination of laminate robot manufacturing and sample-efficient reinforcement learning. We leverage this methodology to conduct an extensive robot learning experiment. Inspired by locomotion in sea turtles, we design a low-cost crawling robot with variable, interchangeable fins. Learning is performed using both bio-inspired and original fin designs in an artificial indoor environment as well as a natural environment in the Arizona desert. The findings of this study show that static policies developed in the laboratory do not translate to effective locomotion strategies in natural environments. In contrast, sample-efficient reinforcement learning can help to rapidly accommodate changes in the environment or the robot.

UR - http://www.scopus.com/inward/record.url?scp=85048833702&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85048833702&partnerID=8YFLogxK

M3 - Conference contribution

VL - 13

BT - Robotics

PB - MIT Press Journals

ER -