ucbdrive / hd3

Code for Hierarchical Discrete Distribution Decomposition for Match Density Estimation (CVPR 2019)
BSD 3-Clause "New" or "Revised" License

Discrepancies model_zoo models vs. paper #22

Closed hofingermarkus closed 4 years ago

hofingermarkus commented 4 years ago

Hi and thank you for sharing your wonderful work!

I was trying to reproduce the results you report on KITTI with the hd3fc_chairs_things_kitti-bfa97911.pth model from the model zoo. For this I used your inference script together with the KITTI training ground truth. While the numbers match quite nicely for KITTI 2012, I am getting discrepancies for KITTI 2015 that I can't explain.

The inference script reports an average end-point error (EPE) of 1.40 px (the paper states 1.31). And when I use the C++ evaluation code from the KITTI homepage together with the KITTI 2015 training files, I get an Fl-all error of 4.43% (the paper states 4.1%).
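For reference, the two metrics being compared can be sketched as follows. This is a minimal NumPy sketch, not the official KITTI devkit: it assumes dense flow arrays plus a validity mask (KITTI ground truth is sparse), and implements the standard Fl-all definition of a pixel being an outlier when its EPE exceeds both 3 px and 5% of the ground-truth flow magnitude.

```python
import numpy as np

def epe_and_fl_all(flow_pred, flow_gt, valid):
    """Average end-point error and KITTI-style Fl-all outlier rate.

    flow_pred, flow_gt: (H, W, 2) float arrays of (u, v) flow.
    valid: (H, W) boolean mask of pixels with ground truth.
    """
    err = np.linalg.norm(flow_pred - flow_gt, axis=-1)[valid]  # per-pixel EPE
    mag = np.linalg.norm(flow_gt, axis=-1)[valid]              # GT flow magnitude
    epe = err.mean()
    # Outlier: EPE > 3 px AND EPE > 5% of GT magnitude (KITTI definition)
    outliers = (err > 3.0) & (err > 0.05 * mag)
    return epe, 100.0 * outliers.mean()
```

Small differences between a script like this and the official C++ evaluation (e.g. in how invalid pixels or the non-occluded subset are handled) can also shift the reported numbers slightly.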

Is this the actual model you used to obtain the numbers reported in your paper, or a retrained one? Am I doing something wrong?

Best regards, Markus

yzcjtr commented 4 years ago

Hi Markus,

Thanks for your interest. No, this is not exactly the model we used to report the numbers in the paper: I rewrote most of the code after paper acceptance to make it cleaner and more maintainable, and the provided models were retrained with the new codebase. I tested their performance on the KITTI test sets and found it very close to our paper results (both in the metrics and the rank), so I believe the reimplementation is fine.

Also, please note that subtle differences can arise from random factors, including the random seed and the fact that some PyTorch ops are non-deterministic (e.g. `torch.Tensor.scatter_add_`). Training data on KITTI is also very scarce, so the final performance can fluctuate somewhat between runs.
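To pin down as many of these random factors as possible before retraining, one could use something like the sketch below. This is a generic, hypothetical helper, not part of the HD3 codebase; note that even with all seeds fixed, ops such as `scatter_add_` on CUDA can remain non-deterministic, so exact reproduction is not guaranteed.

```python
import random

import numpy as np
import torch

def seed_everything(seed: int = 0) -> None:
    """Seed the common sources of randomness in a PyTorch training run."""
    random.seed(seed)                           # Python's own RNG
    np.random.seed(seed)                        # NumPy RNG (data augmentation etc.)
    torch.manual_seed(seed)                     # seeds CPU and all CUDA devices
    torch.backends.cudnn.deterministic = True   # prefer deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False      # autotuner picks kernels non-deterministically

# Same seed -> same random stream on the same hardware/software stack
seed_everything(42)
a = torch.randn(3)
seed_everything(42)
b = torch.randn(3)
assert torch.equal(a, b)
```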

If possible, I would suggest you retrain all the models by yourself for specific purposes.