RGring (original post; issue closed 3 years ago):
After comparing the inference speed of YOLOX-nano and YOLOX-tiny on a Jetson NX using TensorRT, I found that YOLOX-tiny is faster (tiny: 60 ms, nano: 100 ms). Is there an explanation for that?
Some more details and screenshots might help.
@RGring Only 60 ms and 100 ms!? What image size were you testing with?
So which model is best suited for Jetson devices: YOLOX-tiny or YOLOv4-tiny?
The paper also reports speed only on a V100, whereas YOLOv4-tiny is reported to run at 32 FPS on a Jetson Xavier AGX.
I don't know what led to those results. After averaging over a larger number of images (640 x 640), I need to correct my reported values:
YOLOX-tiny: 21.4 ms
YOLOX-nano: 18.9 ms
So no huge speed-up, but both are quite fast. Closing the issue, and sorry about the misleading post!
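For anyone reproducing these numbers, here is a minimal timing sketch of how such an average can be taken (function and variable names are illustrative; it assumes a CUDA-resident model such as a torch2trt TRTModule):

```python
import time
import torch

@torch.no_grad()
def avg_latency_ms(model, input_size=(640, 640), warmup=20, runs=200):
    """Average single-image GPU latency in milliseconds."""
    x = torch.randn(1, 3, *input_size, device="cuda")
    for _ in range(warmup):       # warm-up: cuDNN autotuning, GPU clock ramp-up
        model(x)
    torch.cuda.synchronize()      # ensure all queued kernels have finished
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    torch.cuda.synchronize()      # wait for the timed kernels before stopping the clock
    return (time.perf_counter() - start) / runs * 1000.0
```

Without the synchronize calls, kernel launches are asynchronous and the measured time can come out far too low, which is one plausible source of skewed single-image numbers.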
@RGring Wow, that's nearly 50 frames per second. Great, and thanks for sharing.
Hi, may I ask which version of the yolox_nano weights you are using? I am using the 0.1.0 version of yolox_nano, and the result of TensorRT C++ inference is wrong. What is going on? @RGring
I used these weights (and re-trained on my own data): https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_nano.pth. In what way is inference problematic? I used Python with the torch2trt package.
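For reference, the conversion follows the pattern of tools/trt.py in the YOLOX repo; a rough sketch (experiment name and paths are illustrative, adjust to your setup):

```python
import torch
from torch2trt import torch2trt
from yolox.exp import get_exp

exp = get_exp(None, "yolox-nano")        # built-in nano experiment
model = exp.get_model().eval().cuda()

ckpt = torch.load("yolox_nano.pth", map_location="cpu")
model.load_state_dict(ckpt["model"])     # YOLOX checkpoints store weights under "model"
model.head.decode_in_inference = False   # the repo's TRT demo decodes outputs on the host side

x = torch.ones(1, 3, exp.test_size[0], exp.test_size[1]).cuda()
model_trt = torch2trt(model, [x], fp16_mode=True, max_workspace_size=1 << 30)
torch.save(model_trt.state_dict(), "model_trt.pth")
```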
Python inference gives me no problems, but the result of C++ inference is wrong. There may be a problem with the preprocessing logic.
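For comparison, the Python-side preprocessing in recent YOLOX versions looks roughly like this (my reading of yolox/data/data_augment.py; note that, as far as I recall, older releases around 0.1.0 additionally applied ImageNet mean/std normalization, so a version mismatch between weights and preprocessing is worth ruling out):

```python
import cv2
import numpy as np

def preproc(img, input_size=(416, 416)):
    """Letterbox resize with 114-padding; HWC BGR uint8 -> CHW float32."""
    padded = np.full((input_size[0], input_size[1], 3), 114, dtype=np.uint8)
    r = min(input_size[0] / img.shape[0], input_size[1] / img.shape[1])
    resized = cv2.resize(
        img,
        (int(img.shape[1] * r), int(img.shape[0] * r)),
        interpolation=cv2.INTER_LINEAR,
    )
    padded[: int(img.shape[0] * r), : int(img.shape[1] * r)] = resized
    padded = padded.transpose(2, 0, 1)  # HWC -> CHW; no mean/std, no /255
    return np.ascontiguousarray(padded, dtype=np.float32), r
```

If the C++ side resizes without the aspect-ratio-preserving padding, or normalizes when the weights expect raw pixels (or vice versa), the detections will come out wrong even though the engine itself is fine.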
@zzzfo @RGring Hello guys. Have you solved these issues?
I am trying to run yolox-nano using TRT and DeepStream on a Jetson Xavier, and I cannot get it to work.
YOLOX-s runs at only 1 FPS, but it runs. When I try to run tiny or nano, either my terminal freezes or I get a segmentation fault.
I convert everything the same way with torch2trt, and when I check the TRT file with demo.py from the original repo it does work, so the model_trt.pth is fine.
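For what it's worth, this is roughly how I sanity-check the serialized engine outside DeepStream (paths are illustrative; the dummy input must match the size the engine was built with, and the nano/tiny experiments default to 416x416 in the repo while yolox-s uses 640x640, so a size mismatch in the pipeline is one thing to rule out):

```python
import torch
from torch2trt import TRTModule

model_trt = TRTModule()
model_trt.load_state_dict(torch.load("model_trt.pth"))

x = torch.ones(1, 3, 416, 416).cuda()   # must match the build-time input size
y = model_trt(x)
print(y.shape if torch.is_tensor(y) else [t.shape for t in y])
```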
Any ideas why this is happening?