DerryHub / BEVFormer_tensorrt

BEVFormer inference on TensorRT, including INT8 Quantization and Custom TensorRT Plugins (float/half/half2/int8).
Apache License 2.0

Issue trying to deploy model on TensorRT #105

Closed: gcunhase closed this issue 3 months ago

gcunhase commented 3 months ago

Error

TensorRT 10.0.1.6, RTX 3090:

[07/26/2024-23:48:35] [E] Error[9]: /MultiScaleDeformableAttnTRT: could not find any supported formats consistent with input/output data types
[07/26/2024-23:48:35] [E] Error[9]: [pluginV2Builder.cpp::reportPluginError::23] Error Code 9: Internal Error (/MultiScaleDeformableAttnTRT: could not find any supported formats consistent with input/output data types)

Steps to reproduce

  1. Prepare the Docker container as instructed in this repo.
  2. Modify the PyTorch-to-ONNX export script so that it exports a simplified model (otherwise an error related to the Slice layer is thrown):
    $ perl -pi -e 's/keep_initializers_as_inputs=True/keep_initializers_as_inputs=False/g' det2trt/convert/pytorch2onnx.py
    $ perl -pi -e 's/do_constant_folding=False/do_constant_folding=True/g' det2trt/convert/pytorch2onnx.py
  3. Export the simplified ONNX model:
    $ python tools/pth2onnx.py configs/bevformer/plugin/bevformer_tiny_trt_p.py bevformer_tiny_epoch_24.pth --opset=13 --cuda --flag=cp_op13_simp
  4. Try to build the TensorRT engine:
    $ trtexec --onnx=bevformer_tiny_epoch_24_cp_op13_simp.onnx --staticPlugins=libtensorrt_ops.so --fp16

    The error occurs at this step. It also happens without the --fp16 flag and even with the --best flag (a quick way to inspect the exported graph's tensor types is sketched after this list).
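
For reference, here is a minimal inspection sketch (my own, not from the repo) that checks whether the exported graph actually declares element types for the tensors around the MultiScaleDeformableAttnTRT nodes. The filename is taken from the export command above; in ONNX, elem_type 0 means UNDEFINED.

```python
import onnx

# Load the model exported in step 3 (filename assumed from the command above).
model = onnx.load("bevformer_tiny_epoch_24_cp_op13_simp.onnx")
graph = model.graph

# Gather every declared tensor type: graph inputs/outputs, value_info, initializers.
type_map = {}
for vi in list(graph.input) + list(graph.output) + list(graph.value_info):
    type_map[vi.name] = vi.type.tensor_type.elem_type  # 0 == UNDEFINED
for init in graph.initializer:
    type_map[init.name] = init.data_type

# Print the declared element type of each tensor touching the custom plugin op.
for node in graph.node:
    if node.op_type == "MultiScaleDeformableAttnTRT":
        for name in list(node.input) + list(node.output):
            elem = type_map.get(name, 0)
            suffix = " (UNDEFINED)" if elem == 0 else ""
            print(f"{node.name}: '{name}' elem_type={elem}{suffix}")
```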

gcunhase commented 3 months ago

@DerryHub @firestonelib

gcunhase commented 3 months ago

This is due to the plugin inputs/outputs not having their types defined in the ONNX graph. After fixing this manually (with ORT + onnx_graphsurgeon), the model builds and runs with trtexec.
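
For anyone hitting the same error, here is a minimal sketch of the kind of manual patch I mean, assuming the plugin I/O should be float32; the dtype choice and the output filename are my own, and the actual workflow may also need shape/type inference first.

```python
import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("bevformer_tiny_epoch_24_cp_op13_simp.onnx"))

for node in graph.nodes:
    if node.op_type == "MultiScaleDeformableAttnTRT":
        # Assumption: the plugin runs with float32 I/O here; use float16 instead
        # if the engine is built with half-precision plugin formats.
        for tensor in list(node.inputs) + list(node.outputs):
            if isinstance(tensor, gs.Variable) and tensor.dtype is None:
                tensor.dtype = np.float32

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph),
          "bevformer_tiny_epoch_24_cp_op13_simp_typed.onnx")
```

With the types filled in, the trtexec command from step 4 builds the engine, as noted above.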

Is there a way to export the ONNX model with the tensor types already set?