TY - GEN
T1 - What can I do around here? Deep functional scene understanding for cognitive robots
AU - Ye, Chengxi
AU - Yang, Yezhou
AU - Mao, Ren
AU - Fermüller, Cornelia
AU - Aloimonos, Yiannis
PY - 2017/7/21
Y1 - 2017/7/21
N2 - For robots that have the capability to interact with the physical environment through their end effectors, understanding the surrounding scenes is not merely a task of image classification or object recognition. To perform actual tasks, it is critical for the robot to have a functional understanding of the visual scene. Here, we address the problem of localizing and recognizing functional areas in an arbitrary indoor scene, formulated as a two-stage deep-learning-based detection pipeline. A new scene functionality testbed, compiled from two publicly available indoor scene datasets, is used for evaluation. Our method is evaluated quantitatively on the new dataset, demonstrating the ability to perform efficient recognition of functional areas in arbitrary indoor scenes. We also demonstrate that our detection model generalizes to novel indoor scenes by cross-validating it with images from two different datasets.
UR - http://www.scopus.com/inward/record.url?scp=85027976348&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85027976348&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2017.7989535
DO - 10.1109/ICRA.2017.7989535
M3 - Conference contribution
AN - SCOPUS:85027976348
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 4604
EP - 4611
BT - ICRA 2017 - IEEE International Conference on Robotics and Automation
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 IEEE International Conference on Robotics and Automation, ICRA 2017
Y2 - 29 May 2017 through 3 June 2017
ER -