yang-fei opened this issue 4 years ago
This measurement method is incorrect. Since PyTorch executes GPU operations asynchronously, simply calling time.time() from the standard library around the forward pass ignores that asynchrony and can report misleading numbers. You can refer to this article: https://towardsdatascience.com/the-correct-way-to-measure-inference-time-of-deep-neural-networks-304a54e5187f
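For reference, here is a minimal sketch of the timing pattern that article recommends (warm-up runs, CUDA events, and explicit synchronization). The model and input shape below are placeholders for illustration only, not this repo's actual network or preprocessing:

```python
import time
import torch

# Placeholder model/input for illustration; substitute the repo's real
# network and a properly preprocessed 1280x960 image.
model = torch.nn.Conv2d(3, 16, 3, padding=1).cuda().eval()
dummy_input = torch.randn(1, 3, 960, 1280, device="cuda")

# Warm-up: the first few forward passes include CUDA context creation and
# cuDNN autotuning overhead and should not be counted.
with torch.no_grad():
    for _ in range(10):
        _ = model(dummy_input)

# CUDA events time the work on the GPU stream itself.
starter = torch.cuda.Event(enable_timing=True)
ender = torch.cuda.Event(enable_timing=True)
timings = []
with torch.no_grad():
    for _ in range(100):
        starter.record()
        _ = model(dummy_input)
        ender.record()
        torch.cuda.synchronize()                      # wait for the GPU to finish
        timings.append(starter.elapsed_time(ender))   # milliseconds

print(f"mean inference time: {sum(timings) / len(timings):.2f} ms")

# Wall-clock timing with time.time() is only meaningful if you synchronize
# before starting and before stopping the clock.
torch.cuda.synchronize()
t0 = time.time()
with torch.no_grad():
    _ = model(dummy_input)
torch.cuda.synchronize()
print(f"wall-clock with sync: {(time.time() - t0) * 1000:.2f} ms")
```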
I ran the code and tested some images. On average it took about 200 ms to process one 1280x960 image, which is far slower than the paper reports. I tested on a GTX 1080 Ti, a GTX Titan X, and an RTX 2080 Ti. How did you achieve the inference speed stated in the paper?