gaikwadrahul8 opened 4 days ago
This issue originally reported by @DerryFitz has been moved to this dedicated repository for ai-edge-torch to enhance issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.
We appreciate your understanding and look forward to your continued involvement.
1. System information
I am attempting to convert a QAT (quantization-aware training) model trained with int8 weights and int16 activations to a TFLite model using `tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8`. Unfortunately, conversion fails with this opset. Minimal code to reproduce the error:
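(The original reproduction snippet did not carry over in the migration. Below is a minimal sketch of the 16x8 conversion flow being described, using a hypothetical stand-in Keras model and a random calibration dataset, since the original QAT model definition is not shown.)

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in model; the original QAT model definition is not shown.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
])

def representative_dataset():
    # Calibration samples required for integer quantization.
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# The int16-activation / int8-weight opset from the report.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
]
tflite_model = converter.convert()
```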
This yields the following error:
Running the same process with full int8 quantization yields no errors:
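(The original int8 snippet is also missing. For comparison, a sketch of the equivalent full-int8 flow, again with a hypothetical stand-in model; the only material change from the 16x8 version is the opset.)

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in model, as before.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
])

def representative_dataset():
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Standard full-int8 opset: int8 weights AND int8 activations.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()
```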