I tested multiple models with deepstream-yolo, and the INT8 results are very poor. I also tested QAT on a classification model and it worked perfectly. If I generate a TensorRT engine with a project like yolov8-qat, can I plug it directly into deepstream-yolo?
QAT brings a significant improvement to model quantization. Here's an implementation for YOLOv9 with nearly zero accuracy loss and a substantial reduction in latency.
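On the original question of loading a QAT-generated engine into deepstream-yolo: below is a minimal sketch of the nvinfer config change, assuming the QAT project exports a serialized TensorRT engine (the file names `model_b1_gpu0_int8.engine` and `yolov8s-qat.onnx` are placeholders for whatever your export produced). A QAT model carries explicit Q/DQ scales, so no calibration table is needed, but a pre-built engine is only valid on the same GPU and TensorRT version it was built with.

```ini
# Excerpt of a deepstream-yolo nvinfer config (config_infer_primary.txt style).
# File names are placeholders for your own QAT export.
[property]
# Point nvinfer at the pre-built QAT engine; if this file exists and is
# compatible, nvinfer loads it directly instead of rebuilding from ONNX.
model-engine-file=model_b1_gpu0_int8.engine
# Fallback ONNX (with Q/DQ nodes) in case the engine must be rebuilt.
onnx-file=yolov8s-qat.onnx
# 1 = INT8. No int8-calib-file entry: QAT engines carry explicit scales.
network-mode=1
batch-size=1
num-detected-classes=80
# deepstream-yolo's custom YOLO output parser.
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```

One caveat: if the QAT project changes the model's output layout relative to the ONNX that deepstream-yolo normally exports, the bundled `NvDsInferParseYolo` parser may need adjusting to match.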