
Integrating 3D Features and Virtual Visual Servoing for Hand-Eye and Humanoid Robot Pose Estimation

Xavi Gratal, Christian Smith, Mårten Björkman, and Danica Kragic


To enable high-precision grasping and manipulation of objects, or collaborative object manipulation between several robots, knowledge of the transformation between the robot hand and an object is paramount. Although the problem can be circumvented by various visual servoing approaches, it may not always be possible to equip a robot hand with suitable fiducial markers that remain easily visible during task execution. In this paper, we propose an approach for vision-based pose estimation of a robot hand or of the full robot body. The method is based on virtual visual servoing using a CAD model of the robot, and combines 2-D image features with depth features. It can estimate either the pose of the robot hand or the pose of the whole body, provided that the joint configuration is known. We present experimental results demonstrating the performance of the approach on both a mobile humanoid robot and a stationary manipulator.
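The numerical core of virtual visual servoing is to iteratively move a virtual camera so that features projected from the CAD model align with observed image features. As a minimal illustration (not the paper's implementation, which also uses depth features and a full robot model), the sketch below estimates a 6-DOF pose by Gauss-Newton minimization of the reprojection error of known 3-D model points, with the update parameterized as a local twist; the point set, focal length, and finite-difference Jacobian are all assumptions made for this toy example.

```python
import numpy as np

def exp_so3(w):
    """Rodrigues formula: rotation matrix from an axis-angle vector w."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def project(P, R, t, f=500.0):
    """Pinhole projection of Nx3 model points under camera pose (R, t)."""
    Pc = P @ R.T + t          # world points expressed in the camera frame
    return f * Pc[:, :2] / Pc[:, 2:3]

def estimate_pose(P, obs, R0, t0, iters=30, eps=1e-6):
    """Gauss-Newton on the reprojection error over a local twist update,
    mimicking a virtual visual servoing loop with 2-D point features."""
    R, t = R0.copy(), t0.copy()
    for _ in range(iters):
        def residual(xi):
            # left-multiplicative update: pose <- (exp(w) R, exp(w) t + v)
            dR = exp_so3(xi[3:])
            return (project(P, dR @ R, dR @ t + xi[:3]) - obs).ravel()
        r0 = residual(np.zeros(6))
        # finite-difference Jacobian of the residual w.r.t. the twist
        J = np.zeros((len(r0), 6))
        for j in range(6):
            dxi = np.zeros(6)
            dxi[j] = eps
            J[:, j] = (residual(dxi) - r0) / eps
        xi = np.linalg.lstsq(J, -r0, rcond=None)[0]
        R, t = exp_so3(xi[3:]) @ R, exp_so3(xi[3:]) @ t + xi[:3]
    return R, t
```

In the actual method, the rendered CAD model supplies the feature correspondences at each iteration and depth features are stacked into the same error vector; the loop structure stays the same.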

Keywords: visual servoing, humanoids


Last update: 2013-08-05