Closed: i3abghany closed this issue 4 months ago
PS: I am aware of this issue: https://github.com/mlcommons/tiny/issues/110. The author seems to have had a similar problem, but there is no solution on the issue page.
Hello,
I managed to get results with the int-i/o model similar to those of the float-i/o model.
How I did it:
```python
import numpy

# Dequantize the int8 output back to float using the output tensor's
# quantization parameters (scale and zero point).
scale, zp = output_details[0]['quantization']
out = output_data.astype(numpy.float32)
out = scale * (out - zp)
```
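To see the dequantization in isolation, here is a self-contained sketch with placeholder values (the `scale`/`zp` below are hypothetical, standing in for `output_details[0]['quantization']` from the real interpreter):

```python
import numpy as np

# Hypothetical quantization parameters, standing in for
# output_details[0]['quantization'] of the actual model.
scale, zp = 0.05, -128

# Simulated raw int8 model output.
output_data = np.array([-128, 0, 127], dtype=np.int8)

# Dequantize: float_value = scale * (int8_value - zero_point)
out = output_data.astype(np.float32)
out = scale * (out - zp)
# out is now [0.0, 6.4, 12.75]
```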
I have average AUC 0.8408.
Hope this helps. :)
@i3abghany did you manage to solve the issue based on the suggestions above? If so, I will close the issue.
Hello,
I am trying to run inference for the Anomaly Detection benchmark against the model with weights, activations, inputs, and outputs quantized. The average AUC I get is far off the expected value.
I changed nothing but the input handling before inference as the data have to be scaled down and converted to np.int8 (just like other benchmarks). Here's the code for that:
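A minimal sketch of that scaling step, assuming the standard TFLite int8 affine quantization formula (the `scale`/`zp` and `data` values below are placeholders, not the model's actual parameters):

```python
import numpy as np

# Hypothetical stand-ins for input_details[0]['quantization']; the real
# values come from the interpreter's input tensor.
scale, zp = 0.5, -128

# `data` would normally be the float32 feature batch from the test script.
data = np.array([[0.0, 10.0, 63.5]], dtype=np.float32)

# Quantize: int8_value = round(float_value / scale) + zero_point,
# clipped to the int8 range before the cast.
quantized = np.clip(np.round(data / scale) + zp, -128, 127).astype(np.int8)
```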
The `data` parameter comes from the untouched inference code in `03_tflite_test.py`, and `model_path` is `trained_models/model_ToyCar_quant_fullint_micro_intio.tflite`. The average AUC is 0.5564.

The same exact code (without re-scaling the input data type) works for the `trained_models/model_ToyCar_quant_fullint_micro.tflite` model.

I tried to scale the input representative dataset using the following code in the conversion script:
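For comparison, here is a sketch of how a representative dataset is usually supplied during full-integer conversion: samples are typically yielded as float32, since the converter derives the int8 scale/zero-point itself (`train_data` here is a hypothetical array, not the benchmark's actual features):

```python
import numpy as np

# Hypothetical training features standing in for the real ToyCar vectors.
train_data = np.random.rand(200, 640).astype(np.float32)

def representative_dataset():
    # Yield one float32 sample at a time; the TFLite converter computes
    # the int8 quantization parameters from these samples on its own,
    # so no manual int8 scaling is done here.
    for i in range(100):
        yield [train_data[i:i + 1]]
```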
However, this makes the average AUC even worse: 0.4605.
Any hints would be appreciated. Thanks!