Hi, thank you for your interest!
You can find some information about inference speed in issues #48 and #50. Inference speed is influenced by many factors, and I think it is hard to compare different methods without actually trying them. For example, the key steps of this method are: image resizing, person detection (GPU), non-maximum suppression, detection resizing, network prediction (GPU), and non-maximum suppression. Only a few of them run on the GPU.
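If you want to see where the time goes on your hardware, a minimal per-stage timing sketch like the one below can help (assuming PyTorch; `fn` is any stage callable you pass in, such as the detector's forward pass). Note that CUDA calls are asynchronous, so you need `torch.cuda.synchronize()` before reading the clock, otherwise GPU stages look artificially fast:

```python
import time
import torch

def timed_stage(fn, *args):
    """Time one pipeline stage.

    Synchronize before starting so pending GPU work is not billed to this
    stage, and after so the stage's own async GPU work is fully counted.
    """
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    out = fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return out, time.perf_counter() - start

# Example usage (detector / pose_net are placeholders for your own models):
#   boxes, t_det = timed_stage(detector, image)       # person detection
#   heatmaps, t_pose = timed_stage(pose_net, crops)   # network prediction
```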
I'm sorry, I don't know the DenseNet/ResNet models you are referring to, and I don't have an NVIDIA Jetson at the moment, so I can't try them directly.
If you have a Jetson, you may try HRNet (and YOLO, if you need multi-person support) on it using the converter https://github.com/NVIDIA-AI-IOT/torch2trt . Please let me know if you give it a try!
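For reference, the torch2trt conversion is usually just a couple of lines. A minimal sketch, assuming a PyTorch HRNet model is already loaded (`load_hrnet()` is a hypothetical placeholder for however you instantiate it) and an input resolution of 384x288:

```python
import torch
from torch2trt import torch2trt

model = load_hrnet().cuda().eval()        # hypothetical loader for your HRNet weights
x = torch.ones((1, 3, 384, 288)).cuda()   # dummy input at your inference resolution

# Convert to a TensorRT-optimized module; fp16 often helps on Jetson boards
model_trt = torch2trt(model, [x], fp16_mode=True)

with torch.no_grad():
    heatmaps = model_trt(x)   # called exactly like the original module
```

The converted module can be saved with `torch.save(model_trt.state_dict(), ...)` and reloaded through `torch2trt.TRTModule`, so the conversion only has to run once.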
Hi, amazing project and many thanks for continuing to update the repository!
While I've noticed a considerable increase in accuracy compared to more dated models and implementations, not much has been said about the inference speed (in terms of FPS, perhaps?). Can HRNet models be optimised with TensorRT, and any ideas on how inference speeds would compare with DenseNet/ResNet models, such as the implementations found here: NVIDIA-AI-IOT/trt_pose?
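For context, here is roughly how I'd measure FPS to compare models, original versus TensorRT-converted, under identical conditions. A minimal sketch, assuming PyTorch and a CUDA device (the 384x288 input shape is just an example):

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, input_shape=(1, 3, 384, 288), warmup=10, runs=100):
    """Rough FPS benchmark of the forward pass alone."""
    x = torch.randn(input_shape).cuda()
    for _ in range(warmup):        # warm-up iterations (cudnn autotune, caching)
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    torch.cuda.synchronize()       # wait for all queued GPU work to finish
    return runs / (time.perf_counter() - start)
```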