Hi @Liuchongpei, did you directly evaluate the shape-prior pretrained model on the Wild6D testing data? In that case, you should obtain per-category results similar to the following table.
You could obtain slightly different results, since the testing set has been further cleaned.
Yes, I used the pretrained model trained on CAMERA+REAL. The above is what we got: Mug and Camera are similar to yours, but the others are much better. Also, it prints 'Not found the ground truth from ...' when evaluating Mug and Laptop.
Sorry, I don't know what happened here. How did you obtain the shape-prior results? We followed their official code to generate the estimation results, and only evaluated them on Wild6D with our evaluation script by setting --only_eval to True. In addition, did you visualize your results? Here is a visualization result; you could compare it with yours.
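For concreteness, here is a minimal sketch of that evaluation-only run. The script name and the --only_eval flag are taken from this thread; I am assuming --only_eval is parsed as a command-line argument, and any further arguments (e.g. where the precomputed Shape-Prior estimates live) would follow the repo's own argument list:

```python
import subprocess

# Minimal sketch: run only the metric computation on precomputed
# Shape-Prior estimates. `evaluate_wild6d.py` and `--only_eval`
# appear in this thread; additional arguments depend on the repo.
subprocess.run(
    ["python", "evaluate_wild6d.py", "--only_eval", "True"],
    check=True,
)
```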
We only made some modifications to 'evaluate_wild6d.py'. Here is our visualization result.
@Liuchongpei Could you extract the poses with the official Shape-prior repo and evaluate them with our evaluation script, to see if you obtain similar results?
I'm not sure what causes this difference. I guess the mean shapes used in Shape-prior are different from ours: we select the shape templates from the ShapeNet dataset, while Shape-prior estimates the mean shape via a pretrained autoencoder.
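If you want to check this hypothesis, one quick test is to compare the two mean shapes for a category directly, e.g. via a symmetric Chamfer distance. A rough sketch with plain NumPy; the file paths below are hypothetical placeholders, not files from either repo:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric (squared) Chamfer distance between two (N, 3) point clouds."""
    # Pairwise squared distances, shape (len(a), len(b)).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Hypothetical files: a ShapeNet-derived template vs. the
# Shape-prior autoencoder's mean shape for the same category.
ours = np.load("templates/mug_shapenet.npy")   # (N, 3)
theirs = np.load("spd_mean_shapes/mug.npy")    # (M, 3)
print("Chamfer distance:", chamfer_distance(ours, theirs))
```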
OK, I will give it a try later. Did you evaluate NOCS on Wild6D?
@OasisYang Hi, thanks for your great work! I used the evaluation code to evaluate shape-prior on Wild6D and got separate results for the 5 categories. I averaged the 5 categories' results to get the final number, but it seems much better than the one reported in the paper. I don't know if I did something wrong.
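Concretely, I computed the final number as an unweighted mean over the five Wild6D categories, something like this (the values here are placeholders, not my actual results):

```python
# Unweighted mean over the 5 Wild6D categories; the values below
# are placeholders and should be replaced by the per-category
# numbers printed by the evaluation script.
per_category = {
    "bottle": 0.0,
    "bowl": 0.0,
    "camera": 0.0,
    "laptop": 0.0,
    "mug": 0.0,
}
final = sum(per_category.values()) / len(per_category)
print(f"Mean over {len(per_category)} categories: {final:.4f}")
```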