Open deshwalmahesh opened 2 years ago
@Ferev this issue seems to be on the TFLite side, perhaps you could reroute it to someone from the TFLite team?
Hi @deshwalmahesh, could you please share your pretrained MAXIM model? The Colab you pointed to (https://github.com/google-research/maxim/blob/main/colab_inference_demo.ipynb) seems broken.
Never mind, I got the model. I could reproduce the OOM in a 12 GB RAM Colab runtime environment. Let me work on it some more.
I've tested with the newly released TF version 2.11.0 and the issue is gone. Could you please test it with the new version of TF in the Colab?
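For example, a Colab cell along these lines upgrades the runtime to the newer release before re-running the conversion:

```python
# In a Colab cell: upgrade TensorFlow, then restart the runtime so the
# new version is picked up.
!pip install -q tensorflow==2.11.0

import tensorflow as tf
print(tf.__version__)  # should print 2.11.0
```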
I'm using a model for auto image enhancement, google-research/maxim, and it is working perfectly. So I was working on quantization of the model, got the answer from the official sources on how to convert a JAX model to tflite, and it worked. For the code to quantize the MAXIM model, I have answered my own question on Stack Overflow. Code to quantize:
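(a sketch of the conversion and quantization path; the `predict` wrapper, the `model`/`params` loading, and the input shape are placeholders standing in for the actual MAXIM checkpoint code)

```python
import tensorflow as tf
import jax.numpy as jnp

# Hypothetical wrapper around the pretrained MAXIM Flax model;
# `model` and `params` come from loading the JAX checkpoint.
def predict(input_img):
    return model.apply({'params': params}, input_img)

# Example input with an assumed shape; the real model is traced with
# the shape of the images being fed to it.
sample_input = jnp.zeros((1, 256, 256, 3), dtype=jnp.float32)

# Convert the JAX function to TFLite.
converter = tf.lite.TFLiteConverter.experimental_from_jax(
    [predict], [[('input_img', sample_input)]])

# Dynamic-range quantization.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open('maxim_quantized.tflite', 'wb') as f:
    f.write(tflite_model)
```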
Problem: Everything is fine, the model loads and shows the input and output shapes as:
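(a sketch of how the interpreter is loaded and inspected; the model path is a placeholder and the actual printed shape values are omitted)

```python
# Load the quantized model and inspect its input/output details;
# both calls work before any tensor allocation.
interpreter = tf.lite.Interpreter(model_path='maxim_quantized.tflite')
print(interpreter.get_input_details())
print(interpreter.get_output_details())
```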
but when I do:
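(a sketch of the allocation step, assuming the `interpreter` loaded above)

```python
# Allocate buffers for all tensors; this is where the Colab runtime
# runs out of RAM with the quantized model.
interpreter.allocate_tensors()
```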
The memory runs out. This is super weird because my original JAX model was running quite fine, but as soon as I try to allocate memory for the QUANTIZED version, I get this OOM. Any idea why this might be happening?