DerryHub / BEVFormer_tensorrt

BEVFormer inference on TensorRT, including INT8 Quantization and Custom TensorRT Plugins (float/half/half2/int8).
Apache License 2.0

Can't find BEVFormer_ptq_max model #72

Closed Oh-Cest-la-vie closed 1 year ago

Oh-Cest-la-vie commented 1 year ago

Hi, thanks for your great work. Could you tell me where I can download bevformer_r101_dcn_24ep_ptq_max.pth so that I can convert the QDQ model to ONNX and a TensorRT engine?

DerryHub commented 1 year ago

You can generate the .pth by running samples/bevformer/base/quant_max_ptq.sh
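For context, the invocation is roughly the following. This is a sketch based only on the script path named in this thread; the output filename and any extra arguments are assumptions, so check the repository's README for the exact usage:

```shell
# Run max-calibration post-training quantization (PTQ); this is expected
# to produce the quantized checkpoint (e.g. bevformer_r101_dcn_24ep_ptq_max.pth).
sh samples/bevformer/base/quant_max_ptq.sh
```

Once the script finishes, the resulting checkpoint can be passed to the repository's ONNX-export and engine-build steps for the QDQ model.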

Oh-Cest-la-vie commented 1 year ago

Yes, I got it, thanks for your reply !