DerryHub / BEVFormer_tensorrt

BEVFormer inference on TensorRT, including INT8 Quantization and Custom TensorRT Plugins (float/half/half2/int8).
Apache License 2.0

How can I use a different yolox weight, other than the one provided, for quantization? #64

Open NutshellLee opened 1 year ago

NutshellLee commented 1 year ago

What modifications are needed for other yolox or yolov8 weights to work in the 2D quantization task?

NutshellLee commented 1 year ago

I tried yolox_x_fast_8xb8-300e_coco_20230215_133950-1d509fab.pth from the mmyolo GitHub repo and yolox-s.pth from the official YOLOX GitHub repo. Both of them give "The testing results of the whole dataset is empty." after running "trt_evaluate_fp16.sh", for instance. I googled the error, and the suggested solution seems to be "reduce the learning rate". But how exactly could I do that in our case, for the quantization task?
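In mmdetection-style pipelines, "The testing results of the whole dataset is empty." is often a symptom of a checkpoint whose weights do not match the model built from the config (e.g. an mmyolo or official-YOLOX checkpoint loaded into this repo's mmdet-style YOLOX definition), so every prediction scores below the threshold. A quick sanity check is to compare the checkpoint's state_dict keys against the model's. The sketch below uses hypothetical hard-coded key lists as stand-ins; with a real checkpoint you would take the keys from `model.state_dict()` and `torch.load(ckpt_path)["state_dict"]`:

```python
def compare_state_dict_keys(model_keys, ckpt_keys):
    """Report keys the model expects but the checkpoint lacks, and
    extra keys in the checkpoint that the model does not define.

    A large mismatch usually means the config and the weights come
    from different codebases or architectures, which can silently
    leave the model randomly initialized and yield empty results.
    """
    model_keys, ckpt_keys = set(model_keys), set(ckpt_keys)
    return {
        "missing_in_ckpt": sorted(model_keys - ckpt_keys),
        "unexpected_in_ckpt": sorted(ckpt_keys - model_keys),
    }

# Hypothetical stand-in key names for illustration only; real key
# names depend on the actual model config and checkpoint.
model_keys = ["backbone.stem.conv.weight", "bbox_head.cls.weight"]
ckpt_keys = ["backbone.stem.conv.weight", "head.cls_preds.0.weight"]

report = compare_state_dict_keys(model_keys, ckpt_keys)
print(report["missing_in_ckpt"])     # keys the model expects but the ckpt lacks
print(report["unexpected_in_ckpt"])  # extras hinting at a different codebase
```

If the two sets diverge heavily, the checkpoint likely needs key remapping (or retraining under this repo's config) before quantization or TensorRT evaluation will produce meaningful results.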