6D Object Pose Tracking in Internet Videos for Robotic Manipulation

ICLR 2025
Equal contribution
¹Czech Institute of Informatics, Robotics and Cybernetics, Czech Technical University in Prague   ²Faculty of Electrical Engineering, Czech Technical University in Prague   ³H Company

Robotic manipulation guided by an instructional video. Given an instructional video from the Internet, our approach (i) retrieves a visually similar mesh for the manipulated object in the video from a large mesh database, (ii) estimates the approximate 6D pose trajectory of the manipulated object across video frames, and (iii) transfers the object trajectory onto a 7-axis robotic manipulator.
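
To make the final transfer step concrete, below is a deliberately simplified, hypothetical sketch of trajectory retargeting: a 3-link planar arm stands in for the 7-axis manipulator, and each waypoint of the object trajectory is reached by minimizing end-effector error plus a small penalty on deviating from the previous joint configuration. The toy arm model, cost weights, and use of scipy.optimize are illustrative assumptions, not the trajectory optimization used in the paper.

import numpy as np
from scipy.optimize import minimize

LINKS = np.array([0.4, 0.3, 0.2])  # link lengths of the toy planar arm (m)

def forward_kinematics(q):
    """End-effector (x, y) position for joint angles q of the planar arm."""
    angles = np.cumsum(q)
    return np.array([(LINKS * np.cos(angles)).sum(),
                     (LINKS * np.sin(angles)).sum()])

def retarget(waypoints, q_init=np.zeros(3)):
    """Follow a 2D waypoint sequence in configuration space."""
    qs, q_prev = [], q_init
    for target in waypoints:
        def cost(q, t=target, qp=q_prev):
            reach = np.sum((forward_kinematics(q) - t) ** 2)  # hit the waypoint
            smooth = 1e-3 * np.sum((q - qp) ** 2)             # stay near last config
            return reach + smooth
        q_prev = minimize(cost, q_prev).x  # warm start from the previous solution
        qs.append(q_prev)
    return np.asarray(qs)

# Retarget a short horizontal sweep of the manipulated object.
sweep = np.stack([np.linspace(0.5, 0.7, 10), np.full(10, 0.3)], axis=1)
joint_traj = retarget(sweep)  # (10, 3) joint-angle trajectory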

Abstract

We seek to extract a temporally consistent 6D pose trajectory of a manipulated object from an Internet instructional video. This is a challenging set-up for current 6D pose estimation methods due to uncontrolled capturing conditions, subtle but dynamic object motions, and the fact that the exact mesh of the manipulated object is not known. To address these challenges, we present the following contributions. First, we develop a new method that estimates the 6D pose of any object in the input image without prior knowledge of the object itself. The method proceeds by (i) retrieving a CAD model similar to the depicted object from a large-scale model database, (ii) 6D aligning the retrieved CAD model with the input image, and (iii) grounding the absolute scale of the object with respect to the scene. Second, we extract smooth 6D object trajectories from Internet videos by carefully tracking the detected objects across video frames. The extracted object trajectories are then retargeted via trajectory optimization into the configuration space of a robotic manipulator. Third, we thoroughly evaluate and ablate our 6D pose estimation method on the YCB-V and HOPE-Video datasets as well as on a new dataset of instructional videos manually annotated with approximate 6D object trajectories. We demonstrate significant improvements over existing state-of-the-art RGB 6D pose estimation methods. Finally, we show that the 6D object motion estimated from Internet videos can be transferred to a 7-axis robotic manipulator, both in a virtual simulator and in a real-world set-up. We also successfully apply our method to egocentric videos from the EPIC-KITCHENS dataset, demonstrating potential for Embodied AI applications.
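
As a minimal illustration of the trajectory-extraction idea (not our actual tracking procedure), the sketch below smooths a sequence of noisy per-frame pose estimates with SciPy: a windowed mean of rotations and a moving average of translations. The window size and the toy trajectory are assumptions for demonstration only.

import numpy as np
from scipy.spatial.transform import Rotation

def smooth_trajectory(rotations, translations, window=5):
    """Smooth a per-frame 6D pose trajectory.

    rotations    -- length-N list of scipy Rotation objects (camera-to-object)
    translations -- (N, 3) array of camera-to-object translations
    """
    n, half = len(rotations), window // 2
    rot_out, t_out = [], []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        # Mean of the rotations in the window; moving average of translations.
        rot_out.append(Rotation.concatenate(rotations[lo:hi]).mean())
        t_out.append(translations[lo:hi].mean(axis=0))
    return rot_out, np.asarray(t_out)

# Toy usage: a noisy rotation about the camera z-axis with a slow drift in depth.
rng = np.random.default_rng(0)
rots = [Rotation.from_euler("z", 0.1 * i + 0.05 * rng.standard_normal())
        for i in range(30)]
trans = np.stack([[0.0, 0.0, 0.5 + 0.01 * i] for i in range(30)])
smooth_R, smooth_t = smooth_trajectory(rots, trans + 0.005 * rng.standard_normal(trans.shape))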

Approach Overview

FreePose overview. Given an input RGB image, our method: (a) detects and segments objects present in the image, (b) retrieves similar meshes from a large-scale object database via patch-based retrieval, (c) estimates the absolute scale of depicted objects in the scene via LLM-based re-scaling, and (d) estimates the camera-to-object rotation \(R\) and translation \(t\) via alignment of the retrieved (approximate) mesh.
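
To give a flavor of step (b), the sketch below ranks pre-rendered template images of database meshes against the segmented query crop using patch features from a self-supervised vision transformer. The specific backbone (DINOv2 ViT-S/14 via torch.hub) and the mean-patch cosine similarity are illustrative assumptions that simplify the patch-based retrieval used in our method.

import torch
import torch.nn.functional as F

# DINOv2 ViT-S/14 from torch.hub as the patch feature extractor.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()

@torch.no_grad()
def patch_descriptor(image):
    """image: (3, H, W) tensor, ImageNet-normalized, H and W divisible by 14.
    Returns an L2-normalized mean-patch descriptor of shape (1, D)."""
    tokens = model.forward_features(image[None])["x_norm_patchtokens"]  # (1, P, D)
    return F.normalize(tokens.mean(dim=1), dim=-1)

@torch.no_grad()
def rank_meshes(query_crop, template_crops):
    """Rank template renderings of database meshes by cosine similarity
    to the segmented query crop; returns mesh indices, best match first."""
    q = patch_descriptor(query_crop)                              # (1, D)
    t = torch.cat([patch_descriptor(c) for c in template_crops])  # (M, D)
    return torch.argsort((t @ q.T).squeeze(1), descending=True)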

Qualitative Results

BibTeX

@inproceedings{ponimatkin2025d,
  title     = {{6D} Object Pose Tracking in Internet Videos for Robotic Manipulation},
  author    = {Georgy Ponimatkin and Martin C{\'\i}fka and Tomas Soucek and M{\'e}d{\'e}ric Fourmy and Yann Labb{\'e} and Vladimir Petrik and Josef Sivic},
  booktitle = {The Thirteenth International Conference on Learning Representations},
  year      = {2025},
}

Acknowledgements

This work was partly supported by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90254), and by the European Union's Horizon Europe projects AGIMUS (No. 101070165), euROBIN (No. 101070596), and ERC FRONTIER (No. 101097822). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. Georgy Ponimatkin and Martin Cífka were also partly supported by the Grant Agency of the Czech Technical University in Prague under allocations SGS25/156/OHK3/3T/13 (GP) and SGS25/152/OHK3/3T/13 (MC).