sony / model_optimization

Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. This project provides researchers, developers, and engineers advanced quantization and compression tools for deploying state-of-the-art neural networks.
https://sony.github.io/model_optimization/
Apache License 2.0

MCT quantization for EfficientDet custom model #1120

Closed murabaya closed 1 month ago

murabaya commented 2 months ago

Issue Type

Feature Request

Source

pip (model-compression-toolkit)

MCT Version

1.11.0

OS Platform and Distribution

ubuntu 20.04.1

Python version

3.10

Describe the issue

I have a custom EfficientDet model (a TensorFlow SavedModel). I want to quantize this model using MCT and convert it into a Keras, TFLite, or ONNX model for the IMX500 (covpy, uni-tensorflow, or uni-pytorch). Although there is an example notebook named example_keras_effdet_lite0.ipynb in the notebooks, it is not helpful for my case. Could you please provide specific code or a method for achieving this? Thank you very much.

Expected behaviour

N/A

Code to reproduce the issue

N/A

Log output

N/A
ofirgo commented 1 month ago

Hi @murabaya, Thank you for bringing this issue to our attention.

Can you please provide more details on what you are trying to run? Are you trying to quantize the model from the tutorial or a different efficientdet model? Did you change anything in the tutorial code for your execution? Can you provide the execution logs or details about the exact problem you're having?

Thanks, Ofir

murabaya commented 1 month ago

Hi Ofir

Thank you for the reply to my question. As I mentioned in the issue description, (1) I have an EfficientDet custom model saved as a TensorFlow SavedModel (saved_model.pb); that is, it is not the tutorial model.
I have since converted it to the following formats:

   my_custom_model.tflite (float model)
   my_custom_model.onnx   (opset16 onnx model)

I would like to quantize either of these models with MCT.

(2) I referred to example_keras_effdet_lite0_for_imx500.ipynb in the tutorial notebooks. For the EfficientDet model, the MCT web page only has this example code. However, it is not helpful for me because it applies to a pre-trained EfficientDet-Lite torch model. For example, the code contains the following:

    # Keras model
    # Create the Keras model and copy weights from a pretrained PyTorch
    # weights file. Saved as "model.keras".
    model_name = 'tf_efficientdet_lite0'
    config = get_efficientdet_config(model_name)
    model = EfficientDetKeras(config, pretrained_backbone=False).get_model([*config.image_size] + [3])

If model_name is replaced with any of the following:

    model_name = 'saved_model.pb'
    model_name = 'my_custom_model.tflite'
    model_name = 'my_custom_model.onnx'

the example code will not work, because it expects the name of a pre-trained EfficientDet model.
I don't know how to quantize my custom EfficientDet model with MCT.
If possible, I would like to see example code for quantizing a custom EfficientDet model.

Thank you, Murabayashi

ofirgo commented 1 month ago

Hi @murabaya ,

Regarding (1): MCT does not accept TFLite or ONNX input models, so unfortunately these models cannot be run through MCT. Regarding (2): the model in the tutorial is a Keras model. The torch code in that tutorial is only used for evaluation and will soon be removed from the Keras tutorial.

To summarize: if you have a Keras model of your custom EfficientDet, you should be able to run it through MCT as explained in the tutorial. Let me know if you have any other questions on this topic.
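For readers landing on this thread: the workflow Ofir describes can be sketched roughly as below. This is a minimal, hedged sketch, not code from the tutorial: the SavedModel path, batch size, and input shape are placeholders you must adjust to your own EfficientDet variant, and the entry point `mct.ptq.keras_post_training_quantization` follows MCT's documented Keras PTQ API, whose exact name and signature may differ across MCT versions (e.g. 1.11 vs. 2.x). Loading with `tf.keras.models.load_model` only works if the SavedModel was originally exported from a Keras model.

```python
# Sketch: post-training quantization of a custom Keras EfficientDet with MCT.
import numpy as np

BATCH = 1
INPUT_SHAPE = (320, 320, 3)  # placeholder; set to your EfficientDet input size


def representative_data_gen():
    """Yield calibration batches for PTQ.

    Random data is used here only to keep the sketch self-contained;
    replace with real preprocessed images from your dataset.
    """
    for _ in range(10):
        yield [np.random.rand(BATCH, *INPUT_SHAPE).astype(np.float32)]


def quantize_saved_model(saved_model_dir: str):
    """Load a Keras-exported SavedModel and quantize it with MCT.

    Imports are kept local so the sketch can be read without
    TensorFlow/MCT installed. MCT does not accept .tflite or .onnx
    inputs, so the model must be loadable as a Keras model.
    """
    import tensorflow as tf
    import model_compression_toolkit as mct

    model = tf.keras.models.load_model(saved_model_dir)

    # Entry-point name per MCT docs; may differ in older MCT releases.
    quantized_model, quantization_info = mct.ptq.keras_post_training_quantization(
        model, representative_data_gen)
    return quantized_model
```

Usage would be something like `qmodel = quantize_saved_model('path/to/saved_model_dir')`, after which the quantized Keras model can be exported for the target hardware with MCT's exporter utilities.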