Towards a Watson that sees: Language-guided action recognition for robots

Ching L. Teo, Yezhou Yang, Hal Daumé, Cornelia Fermuller, Yiannis Aloimonos

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

17 Citations (Scopus)

Abstract

For robots of the future to interact seamlessly with humans, they must be able to reason about their surroundings and take actions that are appropriate to the situation. Such reasoning is only possible when the robot has knowledge of how the world functions, which must either be learned or hard-coded. In this paper, we propose an approach that exploits language as an important resource of high-level knowledge that a robot can use, akin to IBM's Watson in Jeopardy!. In particular, we show how language can be leveraged to reduce the ambiguity that arises when recognizing actions involving hand tools from video data. Starting from the premise that tools and actions are intrinsically linked, with one explaining the existence of the other, we train a language model over a large corpus of English newswire text so that this relationship can be extracted directly. This model is then used as a prior to select the tool and action that best explain the video. We formalize the approach in the context of 1) an unsupervised recognition scenario and 2) a supervised classification scenario, using an EM formulation for the former and integrating language features for the latter. Results are validated on a new hand-tool action dataset, and comparisons with state-of-the-art STIP features show significantly improved results when language is used. In addition, we discuss the implications of these results and how they provide a framework for integrating language into vision in other robotic applications.
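
Below is a minimal, illustrative sketch of the core idea in the abstract, under stated assumptions: a language prior P(action | tool), estimated here from made-up text co-occurrence counts, is combined with vision-based likelihoods P(video | tool, action) to select the tool-action pair that best explains a video. The vocabulary, counts, probabilities, and function names are hypothetical and do not reflect the authors' implementation, corpus, or dataset.

# Hypothetical sketch (Python): language prior x vision likelihood for tool-action selection.
# All counts, scores, and vocabulary below are invented for illustration only.
from collections import defaultdict
from itertools import product

# Assumed co-occurrence counts of (tool, action) pairs in a text corpus.
cooccurrence = {
    ("knife", "cut"): 120, ("knife", "spread"): 15,
    ("hammer", "pound"): 90, ("hammer", "cut"): 2,
    ("brush", "paint"): 70, ("brush", "pound"): 1,
}

def language_prior(counts):
    """Estimate P(action | tool) from co-occurrence counts with add-one smoothing."""
    totals = defaultdict(float)
    for (tool, _action), c in counts.items():
        totals[tool] += c
    actions = {action for _tool, action in counts}
    prior = {}
    for tool, action in product(totals, actions):
        c = counts.get((tool, action), 0)
        prior[(tool, action)] = (c + 1.0) / (totals[tool] + len(actions))
    return prior

# Assumed vision scores P(video | tool, action) from some detector/classifier.
vision_likelihood = {
    ("knife", "cut"): 0.30, ("knife", "spread"): 0.28,
    ("hammer", "pound"): 0.05, ("hammer", "cut"): 0.25,
    ("brush", "paint"): 0.04, ("brush", "pound"): 0.08,
}

def best_explanation(vision, prior):
    """Return the (tool, action) pair maximizing P(video | tool, action) * P(action | tool)."""
    scored = {pair: vision[pair] * prior.get(pair, 0.0) for pair in vision}
    return max(scored, key=scored.get), scored

if __name__ == "__main__":
    prior = language_prior(cooccurrence)
    (tool, action), scores = best_explanation(vision_likelihood, prior)
    print(f"best hypothesis: '{action}' with '{tool}'")

In this toy example the vision scores for "cut with knife" and "cut with hammer" are nearly tied, and the corpus-derived prior resolves the ambiguity in favor of the knife, which is the disambiguation role the abstract attributes to language.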

Original language: English (US)
Title of host publication: 2012 IEEE International Conference on Robotics and Automation, ICRA 2012
Pages: 374-381
Number of pages: 8
DOIs: https://doi.org/10.1109/ICRA.2012.6224589
State: Published - 2012
Externally published: Yes

Fingerprint

  • Hand tools
  • Robots
  • Robotics

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Artificial Intelligence
  • Electrical and Electronic Engineering

Cite this

Teo, C. L., Yang, Y., Daumé, H., Fermuller, C., & Aloimonos, Y. (2012). Towards a Watson that sees: Language-guided action recognition for robots. In 2012 IEEE International Conference on Robotics and Automation, ICRA 2012 (pp. 374-381). [6224589] https://doi.org/10.1109/ICRA.2012.6224589

@inproceedings{569cbb34b8f6450b8db5ed655a1b9b35,
title = "Towards a Watson that sees: Language-guided action recognition for robots",
abstract = "For robots of the future to interact seamlessly with humans, they must be able to reason about their surroundings and take actions that are appropriate to the situation. Such reasoning is only possible when the robot has knowledge of how the World functions, which must either be learned or hard-coded. In this paper, we propose an approach that exploits language as an important resource of high-level knowledge that a robot can use, akin to IBM's Watson in Jeopardy!. In particular, we show how language can be leveraged to reduce the ambiguity that arises from recognizing actions involving hand-tools from video data. Starting from the premise that tools and actions are intrinsically linked, with one explaining the existence of the other, we trained a language model over a large corpus of English newswire text so that we can extract this relationship directly. This model is then used as a prior to select the best tool and action that explains the video. We formalize the approach in the context of 1) an unsupervised recognition and 2) a supervised classification scenario by an EM formulation for the former and integrating language features for the latter. Results are validated over a new hand-tool action dataset, and comparisons with state of the art STIP features showed significantly improved results when language is used. In addition, we discuss the implications of these results and how it provides a framework for integrating language into vision on other robotic applications.",
author = "Teo, {Ching L.} and Yezhou Yang and Hal Daum{\'e} and Cornelia Fermuller and Yiannis Aloimonos",
year = "2012",
doi = "10.1109/ICRA.2012.6224589",
language = "English (US)",
isbn = "9781467314039",
pages = "374--381",
booktitle = "2012 IEEE International Conference on Robotics and Automation, ICRA 2012",

}

TY - GEN

T1 - Towards a Watson that sees

T2 - Language-guided action recognition for robots

AU - Teo, Ching L.

AU - Yang, Yezhou

AU - Daumé, Hal

AU - Fermuller, Cornelia

AU - Aloimonos, Yiannis

PY - 2012

Y1 - 2012

UR - http://www.scopus.com/inward/record.url?scp=84864473231&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84864473231&partnerID=8YFLogxK

U2 - 10.1109/ICRA.2012.6224589

DO - 10.1109/ICRA.2012.6224589

M3 - Conference contribution

AN - SCOPUS:84864473231

SN - 9781467314039

SP - 374

EP - 381

BT - 2012 IEEE International Conference on Robotics and Automation, ICRA 2012

ER -