This is the public repository for our DOPE-Plus project, which builds upon NVIDIA's Deep Object Pose Estimation (DOPE).
It contains our source code for training, inference, and synthetic data generation with BlenderProc, all developed on top of the original DOPE codebase.
We have tested our standalone training and inference scripts on Ubuntu 20.04 with Python 3.8.10, using NVIDIA RTX 4090 and A2000 GPUs.
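Since the scripts were tested only on Python 3.8.10, a quick interpreter check can catch version mismatches early. This is a minimal sketch; the `check_python` helper is illustrative and not part of the codebase:

```python
import sys

# Minimum Python version the scripts were tested against (see above).
MIN_VERSION = (3, 8)

def check_python(min_version=MIN_VERSION):
    """Return True if the running interpreter meets the minimum tested version."""
    return sys.version_info[:2] >= min_version

if __name__ == "__main__":
    if not check_python():
        sys.exit(f"Python {MIN_VERSION[0]}.{MIN_VERSION[1]}+ required, "
                 f"found {sys.version_info.major}.{sys.version_info.minor}")
    print("Python version OK")
```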
We trained and tested DOPE-Plus on our HOPE-Syn&Real Dataset and our Synthetic Block Dataset. All synthetic images were generated with our enhanced data generation pipeline. The HOPE-Syn&Real Dataset also contains real images from the publicly available HOPE dataset.
The HOPE dataset is a collection of RGBD images and video sequences with labeled 6-DoF poses for 28 toy grocery objects. The 3D models can be downloaded here.
In addition, we include our 3D textured model of the Block object.
If you find our work helpful, please consider citing us with the following BibTeX entry:
@article{jeffrey2025deeprob,
  title  = {DOPE-Plus: Enhancements in Feature Extraction and Data Generation for 6D Pose Estimation},
  author = {Chen, Jeffrey and Luo, Yuqiao and Yuan, Longzhen},
  url    = {https://github.com/jypipi/DOPE-Plus},
  year   = {2025}
}
Please cite the original DOPE as well:
@inproceedings{tremblay2018corl:dope,
  author    = {Jonathan Tremblay and Thang To and Balakumar Sundaralingam and Yu Xiang and Dieter Fox and Stan Birchfield},
  title     = {Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects},
  booktitle = {Conference on Robot Learning (CoRL)},
  url       = {https://arxiv.org/abs/1809.10790},
  year      = {2018}
}
Jeffrey Chen (jeffzc@umich.edu), Yuqiao Luo (joeluo@umich.edu), and Longzhen Yuan (longzhen@umich.edu)