Closed wuzhongwulidong closed 2 years ago
Hi @wuzhongwulidong
Thanks for the interest in our work! That shouldn't be the case, since multiple people have now reproduced the inference/EPE numbers. Can you help me debug by answering the following (if the issue still persists): could you be running the DeepPruner-fast model instead of the DeepPruner-best model? The inference time and EPE you are getting are pretty close to the DeepPruner-fast version.

Best Regards,
Shivam
@ShivamDuggal4 Thanks for sharing your great work, and thanks for your reply!
I have tried, but I cannot reproduce results close to the reported inference time of 182ms and EPE of 0.86 on the SceneFlow finalpass test set.
To make debugging easier, I have released the whole testing and inference code, as well as the conda env files corresponding to pytorch 0.4.1 and pytorch 1.7.1. Please check https://github.com/wuzhongwulidong/DeepPruner_For_Simplicity_Public.git. The testing and inference scripts are in ./scripts of the repository, and the conda env files are in ./condaEnvFiles. I have tried but cannot find where the problem is.
As the picture shows, the EPE is 1.037:
As the picture shows, the inference time is 47ms:
My GPU is a Titan Xp.
Best regards,
Wu
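For reference when comparing the EPE numbers above: on SceneFlow, EPE (end-point error) is conventionally the mean absolute difference between predicted and ground-truth disparity over valid pixels. This is a minimal sketch of that convention, not the evaluation code from either repository; the `max_disp=192` valid-range mask is the usual SceneFlow assumption.

```python
import numpy as np

def epe(pred, gt, max_disp=192):
    """End-point error: mean |pred - gt| over valid ground-truth pixels.

    Sketch of the common SceneFlow convention: pixels whose ground-truth
    disparity falls outside (0, max_disp) are excluded from the average.
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    mask = (gt > 0) & (gt < max_disp)  # valid-pixel mask
    return np.abs(pred[mask] - gt[mask]).mean()

# Toy example: every prediction is off by 0.5 px, so EPE is 0.5
gt = np.array([[10.0, 20.0], [30.0, 40.0]])
pred = gt + 0.5
print(epe(pred, gt))  # 0.5
```

A mismatch in this mask (e.g. including invalid or out-of-range pixels) is one common source of small EPE discrepancies between evaluation scripts.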
Hi @wuzhongwulidong
I am actually able to reproduce the paper's inference-speed results most times I run on a Titan Xp. I am closing the issue because of inactivity, but feel free to reopen it if it is still unsolved.
Best Regards Shivam
Great work! However, I have encountered a very confusing problem. As reported in the paper, the inference time of the DeepPruner-best model on the SceneFlow test set is 182ms on a Titan Xp, with EPE=0.86. But when I run the provided model, I find the inference time is about 43ms and the EPE is 1.037. That is a big gap!
So I suspect that the released code of the DeepPruner-best model has somehow been simplified, making the inference faster and the EPE worse.
Is that the case? And how can I reproduce the inference time and EPE reported in the paper? Thanks!
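One common cause of implausibly low GPU timings like this, unrelated to any simplification of the model, is timing an asynchronous CUDA launch without synchronizing first: the forward call returns before the kernels finish, so the clock stops early. The sketch below is a generic timing harness, not code from either repository; `run_once` is a hypothetical wrapper around a model forward pass, and the commented-out `torch.cuda.synchronize()` calls are the standard PyTorch API for flushing pending GPU work before reading the clock.

```python
import time

def time_inference(run_once, warmup=5, iters=50):
    """Average per-call wall time in milliseconds.

    Sketch of a fair timing loop: a few warm-up iterations first (cuDNN
    autotuning, allocator warm-up), then the timed iterations. When the
    model runs on a GPU, torch.cuda.synchronize() must be called before
    reading the clock on BOTH sides; otherwise the asynchronous kernel
    launches return early and the measured time can be far lower than
    the true inference time.
    """
    for _ in range(warmup):
        run_once()
    # torch.cuda.synchronize()  # <-- needed here when timing on a GPU
    start = time.perf_counter()
    for _ in range(iters):
        run_once()
    # torch.cuda.synchronize()  # <-- and here, before stopping the clock
    return (time.perf_counter() - start) * 1000.0 / iters

# Toy stand-in for a model forward pass
ms = time_inference(lambda: sum(i * i for i in range(10000)))
print(f"{ms:.3f} ms per call")
```

Checking whether both timing scripts synchronize in the same places (and whether EPE is computed with the same valid-pixel mask) would be a reasonable first debugging step for a gap of this size.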