Capturing and reproducing hand-object interactions would open up considerable possibilities in computer vision, human-computer interfaces, robotics, animation, and rehabilitation. Recently, we have witnessed impressive vision-based hand tracking solutions that could potentially be used for such purposes. Yet, a challenging question remains: to what extent can vision also capture haptic interactions? Such interactions induce motions and constraints that are key to learning and understanding tasks such as dexterous grasping, manipulation, and assembly, as well as to reproducing them with either virtual characters or physical embodiments. Contact forces are traditionally measured by means of haptic technologies such as force transducers, whose major drawback is their intrusiveness: they alter the physical properties of the manipulated objects and obstruct the operator’s own haptic senses. Further drawbacks include the extensive calibration they require, their time-varying accuracy, and their cost. In this paper, we present the force-sensing-from-vision framework, which captures haptic interactions with a cheap and simple setup (e.g., a single RGB-D camera). We then illustrate its use as an implicit force model that improves the reproduction of hand-object manipulation scenarios even under poor visual tracking conditions.
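To give a rough sense of the underlying idea, the short Python sketch below treats the tracked object’s motion as an implicit force sensor: it recovers the net contact force on a rigid object from its tracked trajectory via Newton’s second law, then splits that force among fingertip contacts with a minimum-norm least-squares fit. Everything here is a simplifying assumption of the sketch, not the paper’s method: the function names are hypothetical, acceleration comes from plain finite differences, and the friction-cone constraints that a faithful formulation requires are omitted.

import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # gravitational acceleration, m/s^2

def net_contact_force(positions, dt, mass):
    """Net contact force on a tracked rigid object (hypothetical helper).

    positions: (T, 3) center-of-mass positions from the visual tracker.
    dt: frame period in seconds; mass: object mass in kg.
    Newton's second law gives f_contact = m * (a - g), with the
    acceleration a obtained by finite differences of the trajectory.
    """
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    return mass * (acceleration - GRAVITY)

def distribute_force(f_net, contact_points, com):
    """Minimum-norm split of f_net among fingertip contacts.

    Solves the underdetermined force/torque balance
        sum_i f_i = f_net,   sum_i (p_i - com) x f_i = 0
    by least squares, assuming negligible angular acceleration.
    """
    n = len(contact_points)
    A = np.zeros((6, 3 * n))
    for i, p in enumerate(contact_points):
        A[:3, 3 * i:3 * i + 3] = np.eye(3)   # force-balance rows
        r = p - com
        A[3:, 3 * i:3 * i + 3] = np.array([  # torque-balance rows: skew(r)
            [0.0, -r[2], r[1]],
            [r[2], 0.0, -r[0]],
            [-r[1], r[0], 0.0]])
    b = np.concatenate([f_net, np.zeros(3)])
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f.reshape(n, 3)                   # one 3-D force per contact

In practice, the tracked poses would be filtered before differentiation, since finite differences amplify tracking noise, and the force distribution would be solved as a constrained (e.g., conic) program so that normal forces stay non-negative and tangential forces respect Coulomb friction limits.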
Reference
[pdf]
@inproceedings{oui:pham:2015,
  author       = {Pham, Tu-Hoa and Kheddar, Abderrahmane and Qammaz, Ammar and Argyros, Antonis A.},
  title        = {Capturing and Reproducing Hand-Object Interactions Through Vision-Based Force Sensing},
  booktitle    = {IEEE ICCV Workshop on Object Understanding for Interaction},
  year         = {2015},
  organization = {IEEE}
}