uber-research / DeepPruner

DeepPruner: Learning Efficient Stereo Matching via Differentiable PatchMatch (ICCV 2019)

Inference speed is much faster than reported in the paper, and EPE is worse? #35

Closed wuzhongwulidong closed 2 years ago

wuzhongwulidong commented 3 years ago

Great work! However, I have run into a very confusing problem. As reported in the paper, the inference time of the DeepPruner-Best model on the SceneFlow test set is 182ms on a Titan Xp, with EPE = 0.86. But when I run the provided model, I measure an inference time of about 43ms and EPE = 1.037. That is a big gap! So I suspect that the released DeepPruner-Best model is somehow simplified, making inference faster but EPE worse. Is that the case? And how can I reproduce the inference time and EPE reported in the paper? Thanks!
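A side note for anyone debugging a gap like this: a measured GPU time far below a paper's number is often a timing artifact rather than a different model, because CUDA kernels launch asynchronously and a timer stopped before the GPU finishes only measures launch overhead. A minimal sketch of synchronized timing (`timed_inference`, `model`, and `inputs` are illustrative names, not DeepPruner's actual API):

```python
# Hedged sketch: timing GPU inference correctly in PyTorch.
# Without torch.cuda.synchronize(), the wall clock can stop while
# kernels are still queued, under-reporting per-frame latency.
import time

try:
    import torch
    _HAS_CUDA = torch.cuda.is_available()
except ImportError:        # allow the sketch to run CPU-only
    torch, _HAS_CUDA = None, False

def timed_inference(model, inputs, warmup=5, iters=20):
    """Return average forward-pass time in milliseconds."""
    for _ in range(warmup):            # warm up kernels / allocator caches
        model(*inputs)
    if _HAS_CUDA:
        torch.cuda.synchronize()       # drain pending GPU work before timing
    start = time.perf_counter()
    for _ in range(iters):
        model(*inputs)
    if _HAS_CUDA:
        torch.cuda.synchronize()       # wait until every kernel has finished
    return (time.perf_counter() - start) / iters * 1000.0
```

Averaging over several warmed-up iterations also avoids counting one-time CUDA context and cuDNN autotuning costs in the reported number.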

ShivamDuggal4 commented 2 years ago

Hi @wuzhongwulidong

Thanks for the interest in our work! That shouldn't be the case, since multiple people have now reproduced the inference/EPE numbers. Can you help me debug by answering the following (if the issue still persists):

Best regards, Shivam

wuzhongwulidong commented 2 years ago

@ShivamDuggal4 Thanks for sharing your great work, and thanks for your reply!
I have tried, but I cannot reproduce results close to the reported inference time of 182ms and EPE of 0.86 on the SceneFlow finalpass test set.

To make debugging easier, I have released the whole testing and inference code, as well as the conda environment files corresponding to PyTorch 0.4.1 and PyTorch 1.7.1. Please check https://github.com/wuzhongwulidong/DeepPruner_For_Simplicity_Public.git. The testing and inference scripts are in ./scripts of the repository, and the conda environment files are in ./condaEnvFiles. I have tried but cannot find where the problem is.

As the attached screenshot shows, the EPE is 1.037.
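For anyone comparing EPE numbers across evaluation scripts: on SceneFlow, end-point error is typically the mean absolute disparity error over valid pixels, and small differences in the valid-pixel mask can shift the result. A minimal sketch (the `max_disp = 192` masking convention is an assumption matching common SceneFlow evaluation code, not necessarily this repository's exact setup):

```python
# Hedged sketch: end-point error (EPE) for disparity evaluation.
# EPE = mean |predicted disparity - ground-truth disparity| over valid pixels.
import numpy as np

def epe(pred, gt, max_disp=192):
    """Mean absolute disparity error over valid ground-truth pixels."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    mask = (gt > 0) & (gt < max_disp)   # assumed valid-pixel convention
    return float(np.abs(pred[mask] - gt[mask]).mean())
```

Whether occluded or out-of-range pixels are masked out is exactly the kind of detail that can move EPE between, say, 0.86 and 1.03, so it is worth confirming both scripts use the same mask.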

As the next screenshot shows, the inference time is 47ms.

My GPU is a Titan Xp.

Best regards, Wu

ShivamDuggal4 commented 2 years ago

Hi @wuzhongwulidong

I am actually able to reproduce the paper's inference-speed results most times I run on a Titan Xp. I am closing the issue due to inactivity, but feel free to reopen it if it is still unsolved.

Best regards, Shivam