lewes6369 / TensorRT-Yolov3

TensorRT for Yolov3
MIT License

Why is int8 mode performance worse on Jetson TX2? #62

Open jfangah opened 5 years ago

jfangah commented 5 years ago

Thanks for your work! But I'm confused about why the int8 model performs worse on the Jetson TX2. The inference time of the fp32 416 model is about 250 ms and the fp16 416 model is about 200 ms, but the int8 model takes about 300 ms. I want to know why the int8 model works on x86 but fails on the TX2.

ElonKou commented 5 years ago

It seems that the TX2 doesn't support int8; see "int8 calibration support on TX2".

I also tested the yolo3-416 (fp16) speed on the TX2; it's about 211 ms. With the same config it's about 14 ms per image on my GTX 1060. Have you tested tiny-yolo3-trt performance on the TX2?
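
For context, here is a minimal sketch (not from this repo, assuming a TensorRT 7.x-era C++ builder API) of how one could query whether the platform has fast INT8/FP16 kernels before choosing a run mode. On a TX2, `platformHasFastInt8()` is expected to return false, which is consistent with INT8 running slower there than FP16:

```cpp
// Hypothetical standalone check, not part of TensorRT-Yolov3 itself.
#include <NvInfer.h>
#include <iostream>

// Minimal logger required by createInferBuilder().
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
};

int main()
{
    Logger logger;
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);

    // Query native precision support before picking a run mode.
    if (builder->platformHasFastInt8())
        std::cout << "Fast INT8 kernels available; INT8 calibration makes sense." << std::endl;
    else if (builder->platformHasFastFp16())
        std::cout << "No fast INT8 (e.g. TX2); prefer FP16 mode." << std::endl;
    else
        std::cout << "Neither fast INT8 nor FP16; fall back to FP32." << std::endl;

    builder->destroy();
    return 0;
}
```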