Deci-AI / super-gradients

Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.
https://www.supergradients.com
Apache License 2.0

Jetson DLA support for ONNX model #1760

Closed — ranjitkathiriya544 closed this 5 months ago

ranjitkathiriya544 commented 8 months ago

Hello,

I have converted my model to ONNX format, and now I want to run it on a Jetson using the DLA (Deep Learning Accelerator).

The steps I followed for the conversion:

  1. pth -> ONNX: I exported the model with INT8 quantization and TensorRT as the target backend. The documentation was very helpful, thanks!

https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/examples/model_export/models_export.ipynb

  2. Convert the ONNX model to a .trt engine using:

./trtexec --onnx=<onnx model> --saveEngine=<engin file -save path>.trt --explicitBatch --int8 --useDLACore=0 --allowGPUFallback --useSpinWait
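One way to see exactly which layers land on the DLA versus the GPU is to build with verbose logging and grep the log. This is a sketch, not an official recipe: the paths are placeholders, and note that `--explicitBatch` is deprecated (a no-op) on recent TensorRT versions:

```shell
# Build the engine with verbose logging so per-layer device placement
# (DLA vs GPU fallback) is printed. Paths below are placeholders.
./trtexec --onnx=model.onnx \
          --saveEngine=model.trt \
          --int8 \
          --useDLACore=0 \
          --allowGPUFallback \
          --verbose \
          > build_log.txt 2>&1

# Lines mentioning DLA show which layers ran there and which fell back:
grep -i "DLA" build_log.txt
```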

When I run this command, I get the following warning:

Node: /model/heads/head3/cls_convs/cls_convs.0/seq/act/Relu cannot run in INT8 mode due to missing scale and zero point. This node will not run on DLA.

I get this warning for every layer in the model, and every layer ends up running on the GPU rather than on the DLA.
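For context on what the warning means: DLA in INT8 mode needs an explicit quantization scale for every tensor it touches (TensorRT uses symmetric quantization, so the zero point is always 0). If the exported ONNX model carries no Q/DQ information for a tensor, DLA has nothing to work with and falls back to GPU. The arithmetic behind a per-tensor scale can be sketched in plain Python (this is illustrative only, not SG or TensorRT code):

```python
# Sketch: how a symmetric INT8 scale is derived from calibration data.
# With symmetric quantization the zero point is fixed at 0, so a single
# per-tensor scale is all the missing information the warning refers to.

def int8_scale(calibration_values):
    """Scale mapping the tensor's observed range [-amax, amax] to [-127, 127]."""
    amax = max(abs(v) for v in calibration_values)
    return amax / 127.0

def quantize(x, scale):
    """Quantize a float to INT8 (zero point 0), clamped to the INT8 range."""
    q = round(x / scale)
    return max(-128, min(127, q))

def dequantize(q, scale):
    """Recover an approximate float from its INT8 representation."""
    return q * scale

# Example calibration sample for one activation tensor (made-up values):
activations = [-1.5, 0.2, 0.9, 3.1, -2.7]
scale = int8_scale(activations)
quantized = [quantize(v, scale) for v in activations]
```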

Thanks in advance for your help!

ranjitkathiriya544 commented 5 months ago

Any update on this?

BloodAxe commented 5 months ago

DLA supports only a limited subset of ONNX layers. And I'm not sure we will be able to provide support on this matter, as DLA support falls outside SG's scope of responsibility.

ranjitkathiriya544 commented 5 months ago

Thanks for the reply!