Closed spacycoder closed 3 weeks ago
@spacycoder When using quantize_target_type="int8" in TFLiteConverter together with qat_config = {"per_tensor": True} (which is the default), you'll also need the additional configuration item qat_config = {"disable_requantization_for_cat": True}. This option isn't documented yet, and I'm not sure where in the docs it should go.
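For context, a sketch of where these options would go, assuming TinyNeuralNetwork's usual QATQuantizer + TFLiteConverter workflow (the exact import paths and the placement of `disable_requantization_for_cat` in the quantizer's `config` dict are my reading of the comment above, and `model`/`dummy_input` are assumed to be an already-defined PyTorch module and example input):

```python
import torch
# Assumed import paths from the TinyNeuralNetwork project:
from tinynn.graph.quantization.quantizer import QATQuantizer
from tinynn.converter import TFLiteConverter

quantizer = QATQuantizer(
    model, dummy_input, work_dir='out',
    config={
        'per_tensor': True,                      # the default
        'disable_requantization_for_cat': True,  # needed for the int8 target type
    },
)
qat_model = quantizer.quantize()

# ... QAT fine-tuning of qat_model happens here ...

with torch.no_grad():
    qat_model.eval()
    qat_model = quantizer.convert(qat_model)
    converter = TFLiteConverter(
        qat_model, dummy_input,
        tflite_path='out/model.tflite',
        quantize_target_type='int8',
    )
    converter.convert()
```

This is a configuration sketch, not a verified end-to-end script; check the project's examples for the authoritative workflow.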
Ok, thanks!
Hi, I'm having some issues converting a model when using "int8" as the target type. This is the error I get when running the model with TensorFlow after conversion:
I can reproduce the issue with this code:
and running the model like this:
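(The reporter's error log and snippets are not preserved in this thread.) As background on running an int8-converted model: the interpreter expects quantized input tensors, so float data must first be mapped through the input tensor's scale and zero-point. A minimal numpy sketch of that affine quantization step, with illustrative scale/zero-point values:

```python
import numpy as np

def quantize_int8(x, scale, zero_point):
    """Map float values to int8 via affine quantization: q = round(x / scale) + zero_point."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

# Illustrative values; real ones come from the model's input details.
x = np.array([0.0, 0.5, 1.0], dtype=np.float32)
q = quantize_int8(x, scale=1 / 128, zero_point=-128)
# q is now [-128, -64, 0]
```

In practice, after loading the converted file with `tf.lite.Interpreter(model_path=...)`, the real scale and zero-point are read from `interpreter.get_input_details()[0]["quantization"]` before calling `set_tensor` and `invoke`.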