Open deimsdeutsch opened 5 years ago
Sorry, I have no idea about this error in Int8 mode. However, from my point of view, adopting Int8 quantization may not bring much overall acceleration, since the network inference time is smaller than the time spent on other operations such as image cropping, resizing, and NMS.
I met this problem as well. Have you solved it?
I am using TensorRT 5 and trying to add the code for Int8 Quantization. I tried adding the following lines in baseEngine.cpp but it is giving me an error.
```cpp
builder->setInt8Mode(true);
IInt8Calibrator* calibrator;
builder->setInt8Calibrator(calibrator);
```
```
WARNING: Int8 mode specified but no calibrator specified. Please ensure that you supply Int8 scales for the network layers manually.
ERROR: Calibration failure occured with no scaling factors detected. This could be due to no int8 calibrator or insufficient custom scales for network layers. Please see int8 sample to setup calibration correctly.
```
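The snippet above declares an `IInt8Calibrator*` but never points it at a real calibrator object, so TensorRT sees no calibrator and no per-layer scales, hence the error. What seems to be needed is a concrete subclass of `nvinfer1::IInt8EntropyCalibrator` (available in TensorRT 5) that feeds preprocessed calibration batches to the builder. Below is a minimal sketch under that assumption; the class name, the `loadNextBatch` helper, and the `calibration.cache` filename are all hypothetical and would need to be adapted to your own data pipeline.

```cpp
#include "NvInfer.h"
#include <cuda_runtime_api.h>
#include <fstream>
#include <iterator>
#include <vector>

// Hypothetical entropy calibrator for TensorRT 5: streams calibration
// batches to the GPU and caches the resulting calibration table on disk.
class MyCalibrator : public nvinfer1::IInt8EntropyCalibrator
{
public:
    MyCalibrator(int batchSize, int inputVolume)
        : mBatchSize(batchSize), mInputVolume(inputVolume)
    {
        cudaMalloc(&mDeviceInput, batchSize * inputVolume * sizeof(float));
    }
    ~MyCalibrator() override { cudaFree(mDeviceInput); }

    int getBatchSize() const override { return mBatchSize; }

    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        // Load the next batch of preprocessed calibration images;
        // returning false tells TensorRT calibration data is exhausted.
        std::vector<float> hostBatch(mBatchSize * mInputVolume);
        if (!loadNextBatch(hostBatch))  // hypothetical helper
            return false;
        cudaMemcpy(mDeviceInput, hostBatch.data(),
                   hostBatch.size() * sizeof(float), cudaMemcpyHostToDevice);
        bindings[0] = mDeviceInput;  // assumes a single input binding
        return true;
    }

    const void* readCalibrationCache(size_t& length) override
    {
        // Reuse a previously written calibration table if one exists,
        // so calibration images are only needed on the first run.
        mCache.clear();
        std::ifstream in("calibration.cache", std::ios::binary);
        if (!in) { length = 0; return nullptr; }
        mCache.assign(std::istreambuf_iterator<char>(in), {});
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(const void* cache, size_t length) override
    {
        std::ofstream out("calibration.cache", std::ios::binary);
        out.write(static_cast<const char*>(cache), length);
    }

private:
    bool loadNextBatch(std::vector<float>& batch);  // fill from your dataset
    int mBatchSize, mInputVolume;
    void* mDeviceInput{nullptr};
    std::vector<char> mCache;
};
```

The calibrator would then be passed by address instead of the uninitialized pointer, e.g. `MyCalibrator calibrator(8, 3 * 416 * 416); builder->setInt8Calibrator(&calibrator);`, keeping the object alive until `buildCudaEngine` returns. The `sampleINT8` example shipped with TensorRT shows the same pattern end to end.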