Closed surfii3z closed 4 years ago
I'm glad you like our repository; hopefully it will be useful for you and your research!
The numbers we report are based on TensorRT implementations, which is likely why you are observing slower inference speeds. NVIDIA has a tutorial that discusses converting PackNet with TensorRT; maybe you can find more information there:
https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html
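As a rough sketch of that conversion path (the export script name, checkpoint path, and flags below are assumptions for illustration, not taken from the PackNet repository), a common workflow is to export the trained PyTorch network to ONNX and then build a TensorRT engine with `trtexec`, which ships with TensorRT and reports per-inference latency:

```shell
# 1) Export the trained depth network to ONNX
#    (export_onnx.py and packnet.ckpt are hypothetical names)
python export_onnx.py --checkpoint packnet.ckpt --output packnet.onnx

# 2) Build a TensorRT engine from the ONNX file and benchmark it.
#    --fp16 enables half precision, which the V100 accelerates.
trtexec --onnx=packnet.onnx --fp16 --saveEngine=packnet.trt
```

These commands need a GPU machine with TensorRT installed; the NVIDIA sample-support guide linked above covers the details.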
Thank you for your kind suggestion. I would like to try it, and I will update you with the result.
Hi,
I am really excited about your work, and thank you for sharing such amazing software with us.
I read in the paper that you can achieve 60 ms inference with a Tesla V100, which is impressive.
However, when I tried "PackNet, Self-Supervised Scale-Aware, 384x1280, CS → K" on a Tesla V100 in a Docker environment, it infers each image in ~500 ms, while "PackNet, Self-Supervised Scale-Aware, 192x640, CS → K" did better at ~150 ms.
I would like to know which model you employed to get the 60 ms performance, and how I could run inference with TensorRT as mentioned in the paper.
Best Regards,
Jedsadakorn
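When comparing measurements like the ~500 ms figure above against the paper's reported 60 ms, it helps to benchmark carefully: run warm-up iterations to exclude one-time setup costs, average over many runs, and (for GPU models) synchronize the device before reading the clock. A minimal timing harness, with a stand-in for the real inference call, might look like this (everything here is a generic sketch, not PackNet code):

```python
import time

def benchmark(infer, warmup=10, iters=50):
    """Average wall-clock latency of infer() in ms, after warm-up runs.

    For GPU models, call torch.cuda.synchronize() inside infer (or
    before each clock read); otherwise CUDA's asynchronous launches
    make the timing meaningless.
    """
    for _ in range(warmup):       # warm-up: exclude JIT/allocator setup
        infer()
    start = time.perf_counter()
    for _ in range(iters):
        infer()
    return (time.perf_counter() - start) / iters * 1000.0

# Stand-in workload; replace with the actual model call.
latency_ms = benchmark(lambda: sum(range(1000)))
print(f"{latency_ms:.3f} ms / inference")
```

Timing single images without warm-up tends to inflate numbers considerably, so a harness like this gives a fairer comparison.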