Closed: zc-alexfan closed this issue 1 year ago.
No, there is no plan for that.
I am curious if there is any plan to provide training codes related to other estimations (2D detection, 6d object pose estimation).
For 6D object pose, we have released the code for PoseCNN, DeepIM, and PoseRBPF:
https://github.com/NVlabs/PoseCNN-PyTorch#training-and-testing-on-the-dexycb-dataset
https://github.com/NVlabs/DeepIM-PyTorch#training-and-testing-on-the-dexycb-dataset
https://github.com/NVlabs/PoseRBPF#testing-on-the-dexycb-dataset
As mentioned in the paper Sec. 5.3, we did not retrain PoseRBPF.
I have some simple questions about the dataset and toolkit!
I noticed that your toolkit uses the BOP toolkit, and I also noticed something in the BOP datasets.
They mention that the ground-truth poses are transformed to match the converted 3D models.
My understanding is that the 3D models and pose annotations of the original YCB and YCB (BOP) are slightly different; am I correct?
Q1. Can you clarify how the pose annotations and 3D models of your dataset (DexYCB) relate to the BOP dataset? (e.g., do the DexYCB annotations follow the original YCB or YCB (BOP)?)
Q2. I want to evaluate on the DexYCB dataset using CosyPose or PoseCNN pretrained on YCB or YCB (BOP), without training on DexYCB. Have you already tried this? If yes, can you share the results? It would be really helpful.
Q3. Is there a reason the large clamp was removed compared to the YCB dataset?
"I understood that the 3D models and poses annotation of the existing YCB and YCB (bop) is slightly different, am I correct?" -> Correct.
Q1. Can you please clarify with regards to your dataset(ycbdex) pose annotation and 3d models related to bop dataset?? (ex, dexycb annotation follows the original YCB or YCB(bop)) -> We provide both.
pose.npz (e.g., loaded in the dex-ycb-toolkit API, such as in this example) uses the YCB-Video models. If you download the DexYCB dataset, you can also find a copy of these models under models/ and bop/. In fact, bop/ is a copy of the full DexYCB dataset under the BOP format, which you can directly use with any method that consumes this format, e.g. CosyPose. If you look at bop/models/ and bop/models_eval/, you'll find that these models are directly copied from YCB-V (BOP).
Q2. I want to evaluate on the DexYCB dataset using CosyPose or PoseCNN pretrained on YCB or YCB (BOP), without training on DexYCB. Have you already tried this? If yes, can you share the results? -> We don't have pretrained results. It should be possible to get them for PoseCNN using their released repo: you need to regenerate the results with the pretrained model and then run the evaluation (see here).
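Since bop/ follows the public BOP dataset format, its ground-truth poses can be read without the toolkit. Below is a minimal, hedged sketch of parsing a BOP-style scene_gt.json (the per-scene ground-truth file the BOP format defines, with cam_R_m2c as a row-major 3x3 rotation and cam_t_m2c in millimeters); the sample values and object id are invented for illustration and are not taken from DexYCB.

```python
import json
import os
import tempfile

import numpy as np

# Invented sample mimicking a BOP scene_gt.json entry for image id 0.
sample_scene_gt = {
    "0": [
        {
            "obj_id": 2,  # hypothetical object id, not a real DexYCB annotation
            "cam_R_m2c": [1, 0, 0, 0, 1, 0, 0, 0, 1],  # row-major 3x3 rotation
            "cam_t_m2c": [10.0, -20.0, 500.0],          # translation in mm
        }
    ]
}

def load_scene_gt(path):
    """Return {image_id: [(obj_id, R (3x3), t (3,)), ...]} from a scene_gt.json."""
    with open(path) as f:
        raw = json.load(f)
    poses = {}
    for im_id, anns in raw.items():
        poses[int(im_id)] = [
            (a["obj_id"],
             np.asarray(a["cam_R_m2c"], dtype=np.float64).reshape(3, 3),
             np.asarray(a["cam_t_m2c"], dtype=np.float64))
            for a in anns
        ]
    return poses

# Write the sample to a temporary file and read it back.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "scene_gt.json")
    with open(path, "w") as f:
        json.dump(sample_scene_gt, f)
    gt = load_scene_gt(path)
    obj_id, R, t = gt[0][0]
    print(obj_id, R.shape, t)
```

In the real dataset you would point load_scene_gt at a file such as bop/.../scene_gt.json instead of the synthetic sample above.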
Q3. Is there a reason the large clamp was removed compared to the YCB dataset? -> We did not include 051_large_clamp since it is sufficiently similar to 052_extra_large_clamp.
Thanks for the detailed reply!
Is there going to be a training code release for hand pose estimation?
Thanks