Maximally informative interaction learning for scene exploration

Herke Van Hoof, Oliver Kroemer, Hani Ben Amor, Jan Peters

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

20 Citations (Scopus)

Abstract

Creating robots that can act autonomously in dynamic, unstructured environments is a major challenge. In such environments, learning to recognize and manipulate novel objects is an important capability. A truly autonomous robot acquires knowledge through interaction with its environment without using heuristics or prior information encoding human domain insights. Static images often provide insufficient information for inferring the relevant properties of the objects in a scene. Hence, a robot needs to explore these objects by interacting with them. However, there may be many exploratory actions possible, and a large portion of these actions may be non-informative. To learn quickly and efficiently, a robot must select actions that are expected to have the most informative outcomes. In the proposed bottom-up approach, the robot achieves this goal by quantifying the expected informativeness of its own actions. We use this approach to segment a scene into its constituent objects as a first step in learning the properties and affordances of objects. Evaluations showed that the proposed information-theoretic approach allows a robot to efficiently infer the composite structure of its environment.
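The core idea in the abstract, selecting the exploratory action whose outcome is expected to reduce uncertainty the most, can be illustrated with a small sketch. This is not the authors' implementation; it is a generic expected-information-gain calculation over a discrete belief about the scene, with hypothetical hypotheses and actions ("push_centre", "push_edge") chosen for illustration.

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0.0)

def expected_info_gain(prior, likelihoods):
    """Expected reduction in entropy over hypotheses for one action.

    prior:       P(h) over scene hypotheses, length H.
    likelihoods: likelihoods[h][o] = P(outcome o | hypothesis h, action).
    """
    n_outcomes = len(likelihoods[0])
    # Predictive distribution over outcomes: P(o) = sum_h P(h) P(o | h).
    p_outcome = [sum(prior[h] * likelihoods[h][o] for h in range(len(prior)))
                 for o in range(n_outcomes)]
    gain = entropy(prior)
    for o in range(n_outcomes):
        if p_outcome[o] == 0.0:
            continue
        # Bayes posterior over hypotheses after observing outcome o.
        posterior = [prior[h] * likelihoods[h][o] / p_outcome[o]
                     for h in range(len(prior))]
        gain -= p_outcome[o] * entropy(posterior)
    return gain

def most_informative_action(prior, actions):
    """Pick the action expected to be most informative about the scene."""
    return max(actions, key=lambda a: expected_info_gain(prior, actions[a]))

# Two hypotheses: the observed blob is one object vs. two adjacent objects.
prior = [0.5, 0.5]
actions = {
    # Outcomes per hypothesis: (parts move together, parts split apart).
    "push_centre": [[0.9, 0.1], [0.6, 0.4]],   # weakly discriminative
    "push_edge":   [[0.9, 0.1], [0.1, 0.9]],   # strongly discriminative
}
print(most_informative_action(prior, actions))  # -> push_edge
```

The edge push wins because its outcome distributions differ sharply between the two hypotheses, so observing the outcome collapses most of the uncertainty about whether the scene contains one object or two.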

Original language: English (US)
Title of host publication: IEEE International Conference on Intelligent Robots and Systems
Pages: 5152-5158
Number of pages: 7
DOI: 10.1109/IROS.2012.6386008
State: Published - 2012
Externally published: Yes
Event: 25th IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012 - Vilamoura, Algarve, Portugal
Duration: Oct 7, 2012 - Oct 12, 2012

Other

Other: 25th IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012
Country: Portugal
City: Vilamoura, Algarve
Period: 10/7/12 - 10/12/12

Fingerprint

  • Robots
  • Composite structures

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Computer Vision and Pattern Recognition
  • Computer Science Applications

Cite this

Van Hoof, H., Kroemer, O., Ben Amor, H., & Peters, J. (2012). Maximally informative interaction learning for scene exploration. In IEEE International Conference on Intelligent Robots and Systems (pp. 5152-5158). [6386008] https://doi.org/10.1109/IROS.2012.6386008

@inproceedings{24351e466a2f4a95b657982d31777223,
title = "Maximally informative interaction learning for scene exploration",
abstract = "Creating robots that can act autonomously in dynamic, unstructured environments is a major challenge. In such environments, learning to recognize and manipulate novel objects is an important capability. A truly autonomous robot acquires knowledge through interaction with its environment without using heuristics or prior information encoding human domain insights. Static images often provide insufficient information for inferring the relevant properties of the objects in a scene. Hence, a robot needs to explore these objects by interacting with them. However, there may be many exploratory actions possible, and a large portion of these actions may be non-informative. To learn quickly and efficiently, a robot must select actions that are expected to have the most informative outcomes. In the proposed bottom-up approach, the robot achieves this goal by quantifying the expected informativeness of its own actions. We use this approach to segment a scene into its constituent objects as a first step in learning the properties and affordances of objects. Evaluations showed that the proposed information-theoretic approach allows a robot to efficiently infer the composite structure of its environment.",
author = "{Van Hoof}, Herke and Oliver Kroemer and {Ben Amor}, Hani and Jan Peters",
year = "2012",
doi = "10.1109/IROS.2012.6386008",
language = "English (US)",
isbn = "9781467317375",
pages = "5152--5158",
booktitle = "IEEE International Conference on Intelligent Robots and Systems",

}
