ethnhe / PVN3D

Code for "PVN3D: A Deep Point-wise 3D Keypoints Hough Voting Network for 6DoF Pose Estimation", CVPR 2020
MIT License

How to visualize with only RGB and Depth? #68

Open snijders-tjm opened 3 years ago

snijders-tjm commented 3 years ago

Hello,

Thank you for publishing your code, it is very interesting.

From your paper I read that PVN3D only needs RGB and depth to estimate pose. I now want to test this on my own dataset using objects from either LineMOD or YCB (so that I can use the pretrained model), with only RGB and depth images.

Is it possible, for instance, to alter demo.py to work with only that information, or does that require more adaptations? If so, how could I do that?

Thank you in advance!

ethnhe commented 3 years ago

Yes, it's possible. You can modify the dataset preprocessing scripts, datasets/ycb/ycb_dataset.py or datasets/linemod/lm_dataset.py, to preprocess your own RGB-D images and then feed them into the demo.py script to get the result. You also need to replace the intrinsic matrix in these two scripts with the intrinsic matrix of your own camera.
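For anyone adapting those scripts: the key step that depends on your camera intrinsics is back-projecting the depth map into a point cloud. A minimal sketch of that step (the function name `depth_to_cloud` and the example intrinsics are illustrative, not the repo's actual code; check the preprocessing functions in `ycb_dataset.py` / `lm_dataset.py` for the exact interface):

```python
import numpy as np

def depth_to_cloud(depth, K, cam_scale=1000.0):
    """Back-project an H x W depth map into an N x 3 point cloud using the
    pinhole intrinsic matrix K. Pixels with zero depth are dropped.
    cam_scale converts raw depth units to metres (e.g. 1000 for mm)."""
    h, w = depth.shape
    xmap, ymap = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32) / cam_scale
    valid = z > 0
    x = (xmap - K[0, 2]) * z / K[0, 0]
    y = (ymap - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=-1)[valid]

# Example intrinsics -- replace with your own camera's calibration.
K = np.array([[572.4114, 0.0, 325.2611],
              [0.0, 573.57043, 242.04899],
              [0.0, 0.0, 1.0]], dtype=np.float32)
```

If the cloud from your own camera looks skewed or scaled wrong, the intrinsics or the depth scale factor are the first things to check.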

hyg2sunshine commented 3 years ago

@ethnhe I tried to change demo.py and ycb_dataset.py, but when I put only color.png and depth.png in the test image folder, an error is reported, like the following:

```
Traceback (most recent call last):
  File "demo_test.py", line 170, in <module>
    main()
  File "demo_test.py", line 161, in main
    enumerate(test_loader), leave=False, desc="val"
  File "/home/hyg/.local/lib/python3.6/site-packages/tqdm/std.py", line 1178, in __iter__
    for obj in iterable:
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
    return self._process_data(data)
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/home/hyg/anaconda3/envs/pvn3d/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 81, in default_collate
    raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'NoneType'>
```

Only when I add the corresponding label.png and meta.mat under that path does detection run normally. Is there anything else that needs to be modified? I hope you can help; thank you very much!
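The traceback itself points at the cause: when the annotation files are missing, the dataset's `__getitem__` ends up returning `None`, and torch's `default_collate` cannot batch `None` (that is the final `TypeError`). A hedged sketch of the failure mode and one possible fix, with illustrative file names rather than the repo's actual code:

```python
import os

def fetch_sample(prefix):
    """Illustrative sketch of a modified ycb_dataset.py __getitem__.
    If it returns None when label.png / meta.mat are absent, that None
    reaches the DataLoader worker and crashes default_collate. For
    inference-only data, skip the annotation branch instead."""
    if not os.path.exists(prefix + "-color.png"):
        return None  # no image at all: nothing to load
    sample = {
        "rgb_pth": prefix + "-color.png",
        "dpt_pth": prefix + "-depth.png",
    }
    # Annotations are optional: only attach them when present (evaluation).
    if os.path.exists(prefix + "-label.png") and os.path.exists(prefix + "-meta.mat"):
        sample["label_pth"] = prefix + "-label.png"
        sample["meta_pth"] = prefix + "-meta.mat"
    return sample  # always a dict for valid images, so collate succeeds
```

In other words, every code path in `__getitem__` that opens label.png or meta.mat needs to be guarded or removed for label-free inference; returning `None` from any branch reproduces the error above.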

andreazuna89 commented 2 years ago

Hi all! Were you able to run the demo script with your own RGB-D data? How can we generate the metadata for our own data? Is there a way to use only RGB-D data and a label file as input? The label file can be generated easily, but the metadata is not so easy.
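For what it's worth, a minimal meta.mat can be written with `scipy.io.savemat`. This is a sketch under the assumption that, for inference, the preprocessing scripts mainly read the camera fields; in the YCB-Video convention meta.mat also carries ground-truth fields such as `poses` and `cls_indexes`, which only matter for evaluation. The numbers below are the YCB-Video camera values and must be replaced with your own calibration:

```python
import numpy as np
import scipy.io as scio

# Minimal metadata sketch (camera fields only; field names follow the
# YCB-Video convention -- verify against the dataset script you use).
meta = {
    "intrinsic_matrix": np.array([[1066.778, 0.0, 312.9869],
                                  [0.0, 1067.487, 241.3109],
                                  [0.0, 0.0, 1.0]]),
    "factor_depth": np.array([[10000.0]]),  # raw depth units per metre
}
scio.savemat("000001-meta.mat", meta)
```

If the script you run still complains about a missing field (e.g. `poses`), that branch is evaluation-only and is a candidate for the inference-only guard discussed earlier in this thread.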

Thanks

ghost commented 2 years ago

Hey @snijders-tjm @hyg2sunshine, did you succeed? I would love to hear your feedback on this! Thanks