Tricky bug: keras.layers.Embedding has started overriding save_own_variables/load_own_variables to support quantization. We can no longer rely on absolute index keys when restoring the reverse embedding, e.g.
self.reverse_embeddings.assign(store["1"])
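One possible direction is to key the extra weight by an explicit name instead of its position in the store. Below is a minimal, framework-free sketch of that pattern (the class and method names mirror the Keras hooks but this is not real Keras code, and `ReversibleEmbeddingSketch` is a hypothetical stand-in):

```python
import numpy as np

class ReversibleEmbeddingSketch:
    """Sketch of name-keyed saving: the reverse embedding gets a stable,
    explicit key, so it no longer depends on the absolute ordering
    ("0", "1", ...) that the base layer may reshuffle for quantization."""

    def __init__(self, input_dim, output_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.embeddings = rng.normal(size=(input_dim, output_dim))
        self.reverse_embeddings = rng.normal(size=(output_dim, input_dim))

    def save_own_variables(self, store):
        # Base-class variables keep whatever keys the superclass assigns;
        # our extra weight is stored under an explicit name.
        store["0"] = self.embeddings
        store["reverse_embeddings"] = self.reverse_embeddings

    def load_own_variables(self, store):
        self.embeddings = store["0"]
        # Look up by name, never by positional index.
        self.reverse_embeddings = store["reverse_embeddings"]
```

Whether this works in practice depends on the base class tolerating extra keys in the store, which is exactly what the error linked below rules out today.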
I also couldn't find a way to keep supporting loading tied weights as untied, because the embedding now unconditionally errors when the weight counts mismatch here: https://github.com/keras-team/keras/blob/v3.2.0/keras/layers/core/embedding.py#L232-L264
I don't think we have a direct need for this functionality today, so I removed it for now, but it may be worth considering how to bring it back in the future.