ifzhang / FairMOT

[IJCV-2021] FairMOT: On the Fairness of Detection and Re-Identification in Multi-Object Tracking

How to run in nvidia Jetson Xavier NX embedded platform #191

gongdalinux opened this issue 4 years ago

gongdalinux commented 4 years ago

Thanks for your nice work. I want to run this on an NVIDIA Jetson Xavier NX. I have already converted the .pth model to ONNX; can the ONNX model be converted to TensorRT correctly? Are all ops supported?
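For reference, my export step looks roughly like this (a sketch; the `create_model`/`load_model` helpers and the head layout are assumed from the repo's default DLA-34 config, the checkpoint path is a placeholder, and the DCNv2 layers are not standard ONNX ops, so the export may need a custom symbolic or plugin):

```python
import torch
from models.model import create_model, load_model  # FairMOT helpers (run from src/lib)

# Head layout assumed from the default DLA-34 config; adjust to your opt settings.
heads = {'hm': 1, 'wh': 4, 'id': 128, 'reg': 2}
model = create_model('dla_34', heads, head_conv=256)
model = load_model(model, 'models/fairmot_dla34.pth')  # checkpoint path is a placeholder
model.eval()

# 608x1088 is the default test resolution; both dimensions must be multiples of 32.
dummy = torch.randn(1, 3, 608, 1088)
torch.onnx.export(
    model, dummy, 'fairmot.onnx',
    opset_version=11,
    input_names=['input'],
    output_names=['hm', 'wh', 'id', 'reg'],  # head names assumed from the config above
)
```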

GeekAlexis commented 4 years ago

> Thanks for your nice work. I want to run this on an NVIDIA Jetson Xavier NX. I have already converted the .pth model to ONNX; can the ONNX model be converted to TensorRT correctly? Are all ops supported?

I'm also interested in deploying the model with TensorRT. In its current state, the model probably doesn't run in real time on Jetson devices. Have you tried converting it using trtexec?
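If you do try it, the usual invocation is something like the following (file names are placeholders; the parse will fail on any op without a TensorRT implementation, DCNv2 included, unless a plugin is registered):

```bash
# Build a serialized FP16 engine from the exported ONNX model.
# File names are placeholders; --fp16 matters a lot for Jetson throughput.
trtexec --onnx=fairmot.onnx --saveEngine=fairmot_fp16.engine --fp16 --workspace=2048
```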

gongdalinux commented 4 years ago

@GeekAlexis Now I am trying to convert the .pth model to TensorRT; this repo is a good example: https://github.com/dlunion/tensorRTIntegrate

austinmw commented 4 years ago

@gongdalinux did you make any progress with the DCNv2 ONNX parser plugin?

gongdalinux commented 4 years ago

@austinmw For a DCNv2 ONNX parser plugin, you can refer to this repo: https://github.com/dlunion/tensorRTIntegrate

alberto139 commented 4 years ago

I'm also trying to do something similar.

The model without any modifications runs at 0.2 FPS on a Jetson Nano. Converting the model to half precision gets you to 0.4 FPS. I would expect a Jetson Xavier NX to be a little faster.

I'm currently looking into https://github.com/NVIDIA-AI-IOT/torch2trt to optimize the model further. I attempted to use torch.quantization but saw that it's currently only supported on CPU.
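For reference, my conversion attempt looks roughly like this (a sketch; `model` is assumed to be the already-loaded FairMOT network, and layers without registered converters, DCNv2 included, will break the trace):

```python
import torch
from torch2trt import torch2trt

# `model` is assumed to be the loaded FairMOT network (see the export
# sketch above). torch2trt traces the model with a sample input, so
# every layer it hits needs a registered converter.
model = model.cuda().eval()
x = torch.randn(1, 3, 608, 1088).cuda()
model_trt = torch2trt(model, [x], fp16_mode=True)  # FP16 for Jetson throughput

# The converted module is a drop-in replacement and can be saved for reuse.
torch.save(model_trt.state_dict(), 'fairmot_trt.pth')
```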

I'll update this issue if I have any success.

dysdsyd commented 4 years ago

I am also trying to deploy it on an NVIDIA device; please let me know if you find something. Also, with https://github.com/NVIDIA-AI-IOT/torch2trt I am getting some errors while converting HRNetV2.

dysdsyd commented 4 years ago

@alberto139 were you able to convert the model into TensorRT?

alberto139 commented 4 years ago

@dysdsyd No luck just yet.

I did manage to get around ~1.4 FPS on the Jetson Nano by quantizing to half precision with the native PyTorch half() function and making the image size smaller at inference time (changing opts.py in multiples of 32). I would expect a Jetson Xavier NX to run at around 5 FPS with those modifications; I'm actually waiting to get one delivered.

As far as TensorRT goes, I haven't made any progress after my initial attempts at converting the model using torch2trt. Converting to ONNX first and then to TensorRT might be a more successful approach, but I haven't attempted it yet.

UPDATE

I got the model running on a Jetson Xavier NX at 5 FPS with a smaller input image during inference and quantized to half precision.
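For those asking, the changes amount to roughly the following (a sketch; `model` is assumed to be the loaded FairMOT network, and the resolution is normally set through opts.py / the command-line options in the repo):

```python
import torch

# `model` is assumed to be the loaded FairMOT network.
# Cast the weights to half precision; inputs must match the weight dtype.
model = model.cuda().half().eval()

# Shrink the inference resolution. The backbone downsamples by 32, so both
# dimensions must stay multiples of 32, e.g. 320x576 instead of the default
# 608x1088 (in the repo this is configured via opts.py).
img = torch.randn(1, 3, 320, 576).cuda().half()
with torch.no_grad():
    out = model(img)
```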

Ashwin-Ramesh2607 commented 4 years ago

@alberto139 That's great! Could you give an outline of the entire procedure, covering the model conversion and preparing the TensorRT scripts?

GeekAlexis commented 4 years ago

Hi guys, if you are looking for a highly optimized multiple object tracker with TensorRT acceleration, here is my implementation: https://github.com/GeekAlexis/FastMOT. Let me know if you can get it to run and please star the repo! The FPS on my Xavier NX is 20+ on average.

ghost commented 4 years ago

> @dysdsyd No luck just yet.
>
> I did manage to get around ~1.4 FPS on the Jetson Nano by quantizing to half precision with the native PyTorch half() function and making the image size smaller at inference time (changing opts.py in multiples of 32). I would expect a Jetson Xavier NX to run at around 5 FPS with those modifications; I'm actually waiting to get one delivered.
>
> As far as TensorRT goes, I haven't made any progress after my initial attempts at converting the model using torch2trt. Converting to ONNX first and then to TensorRT might be a more successful approach, but I haven't attempted it yet.
>
> UPDATE
>
> I got the model running on a Jetson Xavier NX at 5 FPS with a smaller input image during inference and quantized to half precision.

Hello,

Is it possible to share your ONNX model for FairMOT?

Thanks, Fatih.

xjsxujingsong commented 3 years ago

Hi guys, I have ported this Python code to C++ using TensorRT.

- Simple version without DCN: https://github.com/xjsxujingsong/FairMOT_TensorRT_C
- Default version with the DCN-based model: https://github.com/xjsxujingsong/FairMOT_TensorRT

GeekAlexis commented 3 years ago

@xjsxujingsong Cool. What FPS does it get on a Jetson?

xjsxujingsong commented 3 years ago

@GeekAlexis Sorry, I don't have that device, but I would guess the speed is similar to that of the Python code.

javalier commented 3 years ago

> Thanks for your nice work. I want to run this on an NVIDIA Jetson Xavier NX. I have already converted the .pth model to ONNX; can the ONNX model be converted to TensorRT correctly? Are all ops supported?

Hi, my friend, I have just come into contact with this field. Can you show me the code for converting this project's PyTorch model to ONNX? Looking forward to your reply.

ayanasser commented 3 years ago

@gongdalinux Hello, how did you convert FairMOT to an ONNX model?