CarlatBuffalo opened this issue 3 years ago
One solution is post-training quantization, which I'm trying now, though it will affect the model's accuracy.
Another possible way out is to change the model's data types to integers, but then what happens to all the parameters (weights, activations/inputs)?
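For reference, this is roughly what "making the parameters integers" means: post-training quantization maps each float tensor to int8 via a per-tensor scale and zero-point (affine quantization). Weights can be quantized from their known min/max; activations additionally need calibration data to estimate their range. A minimal pure-Python sketch of the math (illustrative only, not the TF Lite implementation):

```python
# Affine (asymmetric) int8 quantization sketch.
# Maps floats in [xmin, xmax] to integers in [-128, 127].

def quant_params(xmin, xmax, qmin=-128, qmax=127):
    """Compute scale and zero-point mapping [xmin, xmax] -> [qmin, qmax]."""
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)  # range must contain 0
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = round(qmin - xmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # clamp to the int8 range

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

weights = [-0.7, 0.0, 0.31, 1.2]  # hypothetical weight values
scale, zp = quant_params(min(weights), max(weights))
q = [quantize(w, scale, zp) for w in weights]
recovered = [dequantize(v, scale, zp) for v in q]
```

Each original value is recovered up to at most one quantization step (`scale`), which is the accuracy loss mentioned above; the real converter applies this per tensor (or per channel) automatically.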
Quantization-aware training fails because TF Lite does not support some layers, such as Conv1D and MaxPool1D.
Error message: