The successful execution of grasping by a robot hand requires translating visual information into control signals for the hand that produce the desired spatial orientation and preshape for grasping an arbitrary object. This paper presents an approach to this problem based on separating the task into two modules. A vision module transforms an image into a volumetric shape description using generalized cones. The data structure containing this geometric information becomes the input to the grasping module, which derives a list of feasible grasping modes and a set of control signals for the robot hand. Features of both modules are discussed.
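The two-module pipeline described above can be sketched in code. The sketch below is illustrative only: it assumes a simplified generalized cone (a circular cross-section whose radius varies linearly along a straight axis) and invented grasp-mode names and thresholds, since the abstract does not specify the paper's actual shape representation or mode-selection rules.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class GeneralizedCone:
    """Volumetric primitive from the vision module (simplified assumption):
    a circular cross-section swept along a straight axis, with radius
    varying linearly from r_start to r_end."""
    axis_length: float  # metres
    r_start: float      # radius at one end, metres
    r_end: float        # radius at the other end, metres

    def max_diameter(self) -> float:
        return 2.0 * max(self.r_start, self.r_end)


def feasible_grasp_modes(cone: GeneralizedCone, hand_span: float) -> List[str]:
    """Grasping-module sketch: map cone geometry to candidate grasp modes.
    Mode names and thresholds are hypothetical, not the paper's rules."""
    modes = []
    if cone.max_diameter() <= hand_span:
        modes.append("wrap")        # fingers encircle the body
    if cone.axis_length <= hand_span:
        modes.append("end-pinch")   # grasp across the axis ends
    if 2.0 * min(cone.r_start, cone.r_end) <= 0.25 * hand_span:
        modes.append("precision")   # fingertip grasp of a thin section
    return modes


# The vision module would emit a shape description such as this
# cylindrical body (e.g. a mug without its handle):
mug_body = GeneralizedCone(axis_length=0.10, r_start=0.04, r_end=0.04)
print(feasible_grasp_modes(mug_body, hand_span=0.12))  # → ['wrap', 'end-pinch']
```

Each selected mode would then be mapped to hand preshape and orientation commands; that final control step is omitted here.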