The successful execution of grasps by a robot hand requires translating visual information into control signals that produce the desired spatial orientation and preshape of the hand for an arbitrary object. An approach to this problem based on separating the task into two modules is presented. A vision module transforms an image into a volumetric shape description using generalized cones. The data structure containing this geometric information becomes the input to the grasping module, which produces a list of feasible grasp modes and a set of control signals for the robot hand. Various features of both modules are discussed.
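The two-module pipeline described above can be sketched as follows. This is a minimal illustration only: the `GeneralizedCone` parameterization, the grasp-mode names, and both module bodies are hypothetical placeholders, not the paper's actual representation or algorithms.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical primitive: a generalized cone is commonly parameterized
# by a spine, a cross-section, and a sweeping rule. Here it is reduced
# to a straight spine with a circular cross-section of varying radius.
@dataclass
class GeneralizedCone:
    length: float        # extent of the spine (arbitrary units)
    base_radius: float   # cross-section radius at the base
    tip_radius: float    # cross-section radius at the tip

def vision_module(image) -> List[GeneralizedCone]:
    """Placeholder vision module: transform an image into a volumetric
    shape description built from generalized cones. A real system would
    segment the image and fit primitives; a fixed description is
    returned here for demonstration."""
    return [GeneralizedCone(length=10.0, base_radius=2.0, tip_radius=2.0)]

def grasping_module(shape: List[GeneralizedCone],
                    hand_span: float) -> List[str]:
    """Placeholder grasping module: map the geometric description to a
    list of feasible grasp modes (mode names are illustrative only)."""
    modes = []
    for cone in shape:
        diameter = 2.0 * max(cone.base_radius, cone.tip_radius)
        if diameter <= hand_span:
            modes.append("wrap")             # fingers encircle the body
        if cone.tip_radius <= 0.25 * hand_span:
            modes.append("precision_pinch")  # fingertip grasp at the tip
    return modes

# The vision module's output data structure is the grasping module's input.
shape = vision_module(image=None)
grasps = grasping_module(shape, hand_span=6.0)
```

The design point carried over from the abstract is the separation of concerns: the grasping module never touches the image, only the geometric data structure produced by the vision module.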
Original language: English (US)
Title of host publication: Unknown Host Publication Title
Number of pages: 5
State: Published - Jan 1 1988