DerryHub / BEVFormer_tensorrt

BEVFormer inference on TensorRT, including INT8 Quantization and Custom TensorRT Plugins (float/half/half2/int8).
Apache License 2.0

Cannot use onnxsim #51

Open · shaoqb opened this issue 1 year ago

shaoqb commented 1 year ago

[screenshot: onnxsim error output]

code:

```python
import os
import onnx
import torch
from onnxsim import simplify

onnx_file = "checkpoints/onnx/bevformer_tiny_epoch_24_cp.onnx"
onnx_sim_path = onnx_file.replace(".onnx", "_sim.onnx")
model = onnx.load(onnx_file)
model_sim, check = simplify(model)
assert check, "Simplified ONNX model could not be validated"
onnx.save(model_sim, onnx_sim_path)
```

DerryHub commented 1 year ago

The custom plugins are only for TensorRT. They aren't supported by ONNX Runtime, which onnxsim uses to run the model for constant folding, so simplification fails on those ops.
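
One way to confirm this is to list the ops in the exported graph that live outside the standard ONNX domain; those are the plugin ops that ONNX Runtime has no kernels for. The helper below is a minimal sketch (the `find_custom_ops` name is made up, not part of this repo):

```python
import onnx

def find_custom_ops(model_path):
    """List op types in the graph that are outside the standard ONNX domain."""
    model = onnx.load(model_path)
    standard_domains = {"", "ai.onnx"}  # domains used by standard ONNX ops
    return sorted({node.op_type for node in model.graph.node
                   if node.domain not in standard_domains})

# Any ops printed here are custom (e.g. the TensorRT plugin ops) and
# will make onnxsim / ONNX Runtime fail on this model.
print(find_custom_ops("checkpoints/onnx/bevformer_tiny_epoch_24_cp.onnx"))
```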

shaoqb commented 1 year ago

Do you have any experience or suggestions for how I could still use onnxsim?

DerryHub commented 1 year ago

You can use the cfg without custom plugins.
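
That is, re-export the ONNX model from a config that avoids the custom plugin ops; the original simplify snippet then works unchanged. A minimal sketch, assuming such an export exists (the `_noplugin` file name below is hypothetical):

```python
import onnx
from onnxsim import simplify

# Hypothetical path: an ONNX file exported with a cfg that avoids the
# custom TensorRT plugin ops, so every node is a standard ONNX op.
onnx_file = "checkpoints/onnx/bevformer_tiny_noplugin.onnx"

model = onnx.load(onnx_file)
model_sim, check = simplify(model)  # succeeds once no custom ops remain
assert check, "Simplified ONNX model could not be validated"
onnx.save(model_sim, onnx_file.replace(".onnx", "_sim.onnx"))
```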