marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

An error occurred with int8 #328

Open chrichard opened 1 year ago

chrichard commented 1 year ago

I use DeepStream 6.2 and YOLOv8. When I convert the model to INT8, the conversion process completes normally, but at detection time no targets are detected. In the same DeepStream 6.2 environment, other conversions work fine, such as YOLOv8 to FP16 or YOLOv7 to INT8.
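For context, a minimal sketch of the keys involved in an INT8 build in an nvinfer config such as config_infer_primary_yoloV8.txt (the file names and paths below are illustrative placeholders, not taken from this thread):

```ini
[property]
# ONNX model exported from YOLOv8 (path is a placeholder)
onnx-file=yolov8s.onnx
# Engine file name typically encodes batch size and precision
model-engine-file=model_b1_gpu0_int8.engine
# Calibration table produced during the INT8 build
int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
num-detected-classes=80
```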

chrichard commented 1 year ago

Today I updated DeepStream-Yolo. Further testing found that in the DeepStream 6.2 environment, the INT8 model outputs detection results when batch_size=1, but when batch_size=16 there is no detection box output.
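If the engine is rebuilt for the larger batch, the change would presumably be limited to the batch-related keys in the same nvinfer config (a sketch; the engine file name follows the repo's naming convention but is assumed here):

```ini
[property]
# Same config as above, only the batch-related keys change
batch-size=16
model-engine-file=model_b16_gpu0_int8.engine
network-mode=1
```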

botjeboon commented 1 year ago

I'm having the exact same problem, everything runs fine but no output. I didn't change the batch_size (so it's still set to 1). You're saying that it didn't work when you had batch_size=16, but it did work when batch_size=1?

Was there anything else you might have changed?

chrichard commented 1 year ago

In FP16 mode it works whether batch_size is 1 or 16. But in INT8 mode, only batch_size=1 works.

marcoslucianops commented 1 year ago

DeepStream is a bit confusing about batch-size. According to NVIDIA, batch-size should be equal to the number of sources on the primary GIE and the streammux.
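A sketch of how those values line up in a deepstream-app config for two sources (the section names are the standard deepstream-app ones; the URIs are placeholders):

```ini
[source0]
enable=1
type=3
uri=file:///path/to/video0.mp4

[source1]
enable=1
type=3
uri=file:///path/to/video1.mp4

[streammux]
# Equal to the number of enabled sources
batch-size=2

[primary-gie]
enable=1
# Overrides batch-size in the nvinfer config file
batch-size=2
config-file=config_infer_primary_yoloV8.txt
```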