Robot hand-eye coordination: Shape description and grasping

K. Rao, G. Medioni, H. Liu, G. A. Bekey

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

15 Scopus citations

Abstract

The successful execution of grasps by a robot hand requires a translation of visual information into control signals to the hand which produce the desired spatial orientation and preshape for an arbitrary object. An approach to this problem based on separation of the task into two modules is presented. A vision module is used to transform an image into a volumetric shape description using generalized cones. The data structure containing this geometric information becomes an input to the grasping module, which obtains a list of feasible grasp modes and a set of control signals for the robot hand. Various features of both modules are discussed.
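The abstract describes a two-module decomposition: a vision module converts an image into a generalized-cone shape description, and a grasping module consumes that description to produce feasible grasp modes and hand control signals. The Python sketch below illustrates only this data flow; all names, fields, and the grasp-selection rule (GeneralizedCone, ShapeDescription, GraspPlan, the wrap/pinch heuristic) are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative two-module pipeline: vision -> shape description -> grasping.
# All structures and rules below are assumptions for the sketch, not the
# authors' actual representations or algorithms.

@dataclass
class GeneralizedCone:
    """Volumetric primitive: an axis (spine), a cross-section size, and a sweep rule."""
    axis_length_mm: float
    mean_radius_mm: float
    sweep: str  # e.g. "constant", "linear taper"

@dataclass
class ShapeDescription:
    """Output of the vision module: one or more generalized cones per object."""
    cones: List[GeneralizedCone] = field(default_factory=list)

@dataclass
class GraspPlan:
    """Output of the grasping module: a grasp mode plus hand preshape parameters."""
    mode: str                   # e.g. "wrap", "pinch"
    preshape_aperture_mm: float
    approach_axis: int          # index of the cone chosen as the grasp axis

def vision_module(image) -> ShapeDescription:
    # Placeholder: a real implementation would segment the image and fit
    # generalized cones; here a fixed description stands in for that step.
    return ShapeDescription(cones=[GeneralizedCone(120.0, 25.0, "constant")])

def grasping_module(shape: ShapeDescription) -> List[GraspPlan]:
    # Toy rule: thick cones suggest a wrap grasp, thin ones a pinch.
    plans = []
    for i, cone in enumerate(shape.cones):
        mode = "wrap" if cone.mean_radius_mm > 15.0 else "pinch"
        plans.append(GraspPlan(mode=mode,
                               preshape_aperture_mm=2.5 * cone.mean_radius_mm,
                               approach_axis=i))
    return plans

if __name__ == "__main__":
    shape = vision_module(image=None)   # image source omitted in this sketch
    for plan in grasping_module(shape):
        print(plan)
```

The point of the sketch is the separation of concerns stated in the abstract: the grasping module depends only on the geometric data structure, not on the raw image.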

Original language: English (US)
Title of host publication: Unknown Host Publication Title
Publisher: IEEE
Pages: 407-411
Number of pages: 5
ISBN (Print): 0818608528
State: Published - Jan 1 1988
Externally published: Yes

ASJC Scopus subject areas

  • General Engineering
