Abstract

Humanoid robots have complex kinematic chains whose modeling is error prone. If the robot model is not well calibrated, the hand pose cannot be determined precisely from the encoder readings, which degrades reaching and grasping accuracy. In our work, we propose a novel method to simultaneously i) estimate the pose of the robot hand, and ii) calibrate the robot kinematic model. This is achieved by combining stereo vision, proprioception, and a 3D computer graphics model of the robot. Notably, the use of GPU programming makes it possible to perform the estimation and calibration in real time during the execution of arm reaching movements. Proprioceptive information is exploited to generate hypotheses about the visual appearance of the hand in the camera images, using the 3D computer graphics model of the robot, which includes both kinematic and texture information. These hypotheses are compared with the actual visual input using particle filtering, to obtain both i) the best estimate of the hand pose and ii) a set of joint offsets that calibrate the kinematics of the robot model. We evaluate two different approaches to estimating the 6D pose of the hand from vision (silhouette segmentation and edge extraction) and show experimentally that the pose estimation error is considerably reduced with respect to the nominal robot model. Moreover, the GPU implementation runs about three times faster than the CPU one, allowing real-time operation.
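The core idea described above, comparing proprioception-driven hypotheses against visual input with a particle filter to recover joint offsets, can be sketched in miniature. This is a simplified illustration, not the authors' implementation: the `hand_pose` function is a hypothetical stand-in for the full kinematic chain plus GPU rendering, the observation is a 2D point rather than stereo images, and the likelihood compares predicted and observed pose directly instead of silhouettes or edges.

```python
import numpy as np

rng = np.random.default_rng(0)

def hand_pose(joints):
    # Hypothetical stand-in for forward kinematics + rendering:
    # maps joint angles to a 2D "hand position". The paper instead
    # renders the textured robot model and compares camera images.
    return np.array([np.cos(joints).sum(), np.sin(joints).sum()])

true_offset = np.array([0.05, -0.03, 0.02])   # unknown encoder biases (rad)
encoders = np.array([0.4, 0.9, -0.2])         # encoder readings (rad)
observed = hand_pose(encoders + true_offset)  # what "vision" reports

# Particle filter over the joint-offset vector: each particle is a
# calibration hypothesis; weights come from how well the predicted
# hand pose matches the visual observation.
n = 500
particles = rng.normal(0.0, 0.1, size=(n, 3))
for _ in range(30):
    preds = np.array([hand_pose(encoders + p) for p in particles])
    err = np.linalg.norm(preds - observed, axis=1)
    w = np.exp(-0.5 * (err / 0.05) ** 2)      # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)          # resample by weight
    particles = particles[idx] + rng.normal(0.0, 0.005, size=(n, 3))

offset_estimate = particles.mean(axis=0)
pose_estimate = hand_pose(encoders + offset_estimate)
```

Note that with a low-dimensional observation the offsets themselves may not be uniquely identifiable; what the filter reliably recovers is a calibration that makes the predicted hand pose agree with vision, which is the quantity that matters for reaching.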

Details

Title
Robotic Hand Pose Estimation Based on Stereo Vision and GPU-enabled Internal Graphical Simulation
Author
Vicente, Pedro; Jamone, Lorenzo; Bernardino, Alexandre
Pages
339-358
Publication year
2016
Publication date
Sep 2016
Publisher
Springer Nature B.V.
ISSN
0921-0296
e-ISSN
1573-0409
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
1828248221
Copyright
Springer Science+Business Media Dordrecht 2016