V-HOP

We fuse visual and haptic sensing to achieve accurate real-time in-hand object tracking.

The visual modality, based on FoundationPose, uses a visual encoder to process real and rendered RGB-D observations into feature maps, which are concatenated and refined through a ResBlock to produce visual embeddings. The haptic modality encodes a unified hand-object point cloud, built from the 9D hand point cloud \(\mathcal{P}_h\) and object point cloud \(\mathcal{P}_o\), into a haptic embedding that captures hand-object interactions. Transformer encoders then process the visual and haptic embeddings to estimate the 3D translation and rotation.
<div class="row justify-content-sm-center">
    <div class="col-sm-8 mt-3 mt-md-0">
        {% include figure.html path="assets/img/6.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
    </div>
    <div class="col-sm-4 mt-3 mt-md-0">
        {% include figure.html path="assets/img/11.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
    </div>
</div>
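
To make the fusion concrete, here is a minimal PyTorch sketch of the two-branch design described above. The module names, embedding sizes, max-pooling choice, and the 6D rotation output are illustrative assumptions, not the actual V-HOP implementation.

```python
# Minimal sketch of the visuo-haptic fusion described above.
# All names, dimensions, and the pose parameterization are assumptions.
import torch
import torch.nn as nn


class HapticEncoder(nn.Module):
    """PointNet-style encoder over the unified 9D hand-object point cloud."""

    def __init__(self, in_dim=9, embed_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, points):            # points: (B, N, 9)
        feats = self.mlp(points)          # per-point features: (B, N, D)
        return feats.max(dim=1).values    # global embedding via max-pool: (B, D)


class FusionHead(nn.Module):
    """Transformer encoder over the two embeddings, regressing the pose."""

    def __init__(self, embed_dim=256, num_layers=4, num_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.trans_head = nn.Linear(embed_dim, 3)  # 3D translation
        self.rot_head = nn.Linear(embed_dim, 6)    # rotation (assumed 6D parameterization)

    def forward(self, visual_emb, haptic_emb):     # each: (B, D)
        tokens = torch.stack([visual_emb, haptic_emb], dim=1)  # (B, 2, D)
        fused = self.encoder(tokens).mean(dim=1)               # (B, D)
        return self.trans_head(fused), self.rot_head(fused)


# Usage with stand-in data: a 1024-point hand-object cloud and a visual
# embedding assumed to come from the ResBlock-refined feature maps.
haptic_encoder, head = HapticEncoder(), FusionHead()
points = torch.randn(2, 1024, 9)
visual_emb = torch.randn(2, 256)
translation, rotation = head(visual_emb, haptic_encoder(points))
print(translation.shape, rotation.shape)  # torch.Size([2, 3]) torch.Size([2, 6])
```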

Related Works

2024

  1. IROS
    HyperTaxel: Hyper-Resolution for Taxel-Based Tactile Signal Through Contrastive Learning
    In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024
Unfortunately, we are unable to release the code and the dataset per company policy.

2023

  1. RA-L / ICRA
    ViHOPE: Visuotactile In-Hand Object 6D Pose Estimation with Shape Completion
Hongyu Li, Snehal Dikhale, Soshi Iba, and Nawid Jamali
    IEEE Robotics and Automation Letters, 2023
Unfortunately, we are unable to release the code and the dataset per company policy.
    Presented at ICRA 2024 in Yokohama, Japan :jp:.
    Presented at NeurIPS 2023 Workshop on Touch Processing in New Orleans, LA :us:.