Toward automatic robot instruction from perception: mapping human grasps to manipulator grasps

IEEE Transactions on Robotics and Automation, Vol. 13(1), pp. 81-95


Our approach to programming a robot is direct human demonstration: the system observes a human performing the task, recognizes the human grasp, and maps it onto the manipulator. This paper describes how an observed human grasp can be mapped to the grasp of a given general-purpose manipulator for task replication. Planning the manipulator grasp from the observed human grasp proceeds at two levels: the functional level and the physical level. First, at the functional level, grasp mapping is performed in terms of virtual fingers, where a virtual finger is a group of fingers acting against an object surface in a similar manner. Subsequently, at the physical level, the geometric properties of the object and the manipulator are used to fine-tune the manipulator grasp. Our work concentrates on power (enveloping) grasps and fingertip precision grasps. We conclude with an example of a complete programming cycle, from human demonstration to robot execution.
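To make the two-level mapping concrete, here is a minimal Python sketch of the functional level only. All names (`VirtualFinger`, `Grasp`, `map_functional`) and the proportional finger-assignment rule are illustrative assumptions, not the paper's actual algorithm; the physical-level fine-tuning step, which would adjust contact points from object and manipulator geometry, is deliberately omitted.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VirtualFinger:
    """Functional-level abstraction: a group of real fingers acting
    against the same object surface in a similar manner."""
    real_fingers: List[int]   # indices of the real fingers in the group
    surface_id: int           # object surface the group acts against

@dataclass
class Grasp:
    grasp_type: str                      # e.g. "power" or "fingertip precision"
    virtual_fingers: List[VirtualFinger]

def map_functional(human: Grasp, n_manip_fingers: int) -> Grasp:
    """Map a human grasp to a manipulator at the virtual-finger level:
    preserve the virtual-finger structure (number of groups and opposed
    surfaces) while redistributing the manipulator's real fingers among
    the groups. The proportional split used here is a simplifying
    assumption; it requires n_manip_fingers >= number of virtual fingers."""
    total_human = sum(len(vf.real_fingers) for vf in human.virtual_fingers)
    mapped, next_finger = [], 0
    for i, vf in enumerate(human.virtual_fingers):
        remaining_groups = len(human.virtual_fingers) - i - 1
        if remaining_groups == 0:
            # Last group absorbs whatever fingers remain.
            count = n_manip_fingers - next_finger
        else:
            # Assign fingers in proportion to the human grouping,
            # keeping at least one finger for each remaining group.
            count = max(1, round(n_manip_fingers * len(vf.real_fingers) / total_human))
            count = min(count, n_manip_fingers - next_finger - remaining_groups)
        mapped.append(VirtualFinger(list(range(next_finger, next_finger + count)),
                                    vf.surface_id))
        next_finger += count
    return Grasp(human.grasp_type, mapped)

# Example: a human power grasp with the thumb opposing four fingers,
# mapped onto a hypothetical three-fingered manipulator.
human_grasp = Grasp("power", [
    VirtualFinger([0], surface_id=0),          # thumb
    VirtualFinger([1, 2, 3, 4], surface_id=1), # four opposing fingers
])
print(map_functional(human_grasp, n_manip_fingers=3))
# One manipulator finger takes the thumb's role; the other two oppose it.
```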