parth-desai opened this issue 9 months ago
Thanks for filing this issue, Parth.
As you said, it looks like RNN was disabled as it was unsupported and yet to be verified on TFLite.
We'll be keeping track of this feature request, but please note that LSTM / RNN / GRU variant support is not prioritized at this moment because it is less relevant to today's ML landscape compared to transformers.
Thanks, Jen
**System information**

**Motivation**
I am trying to train an RNN model with quantization-aware training for embedded devices.
**Describe the feature**
I am looking for a way to train with the default 8-bit weights & activations quantization using the `quantize_apply` API, without passing in a custom config.

**Describe how the feature helps achieve the use case**

**Describe how existing APIs don't satisfy your use case (optional if obvious)**
I tried to use the `quantize_apply` API, but I received this error:

```
RuntimeError: Layer gru:<class 'keras.src.layers.rnn.gru.GRU'> is not supported. You can quantize this layer by passing a `tfmot.quantization.keras.QuantizeConfig` instance to the `quantize_annotate_layer` API.
```
After using `quantize_annotate_layer`, I was able to train the model, but the model fails to save with the following error:

I used the following `QuantizeConfig`:
I looked at the source code, and it seems that support for RNN layers was disabled here for some reason.
I was wondering if this can be re-enabled?