tensorflow / model-optimization

A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
https://www.tensorflow.org/model_optimization
Apache License 2.0

quantize_model() cannot detect a keras.Sequential model #1144

Open DKMaCS opened 2 weeks ago

DKMaCS commented 2 weeks ago

Prior to filing: check that this should be a bug instead of a feature request. Everything supported, including the compatible versions of TensorFlow, is listed in the overview page of each technique. For example, the overview page of quantization-aware training is here. An issue for anything not supported should be a feature request.

Describe the bug
I'm passing a Keras Sequential model into quantize_model(), and I'm getting an error saying the model isn't a Sequential model.

System information

Carried out in: Google Colab

TensorFlow version (installed from source or binary): 2.17.0

TensorFlow Model Optimization version (installed from source or binary): 0.8.0

Python version: 3.10.12

Describe the expected behavior
quantize_model() should prepare the Keras Sequential model (built from scratch and imported via load_model()) for quantization-aware training.

Describe the current behavior
ValueError: to_quantize can only either be a keras Sequential or Functional model.

Code to reproduce the issue

```python
!pip install tensorflow_model_optimization

import pandas as pd
import numpy as np
import time
import tensorflow as tf
import os
import tempfile
import keras
import tensorflow_model_optimization as tfmot
from google.colab import drive
from tensorflow.keras.models import load_model

drive.mount('/content/drive')
%cd /content/drive/My Drive/CS528/HW3

model = load_model('/content/drive/My Drive/CS528/HW3/q1_model.keras')

quant_aware_model_tflite = '/content/drive/My Drive/CS528/HW3/s_mnist_quant_aware_training.tflite'
quantize_model = tfmot.quantization.keras.quantize_model
q_aware_model = quantize_model(model)
```


Additional context
I've already checked the same compound conditional (Sequential-or-Functional check) against the imported model just before calling quantize_model(), and it passes. Only when quantize_model() itself handles the model does it decide the model isn't a tf.keras.Sequential object.
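For context, a minimal diagnostic sketch (not part of the original report; the module paths and the reading of tfmot's internals are assumptions) of how such a check can pass in user code yet fail inside the library: under TF 2.17 in Colab, `keras` resolves to Keras 3, so the loaded model is a Keras 3 Sequential rather than the legacy Keras 2 class that tfmot 0.8.0 appears to compare against, which is also why the tf_keras workaround below helps.

```python
# Minimal diagnostic sketch, assuming TF 2.17 / Keras 3 in Colab.
import keras  # resolves to Keras 3 under TF 2.17

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(1),
])

# The user-side check passes because both sides come from Keras 3:
print(isinstance(model, keras.Sequential))   # True
print(type(model).__module__)                # a keras.src.* module (Keras 3)

# tfmot's check appears to run against the legacy Keras 2 Sequential/Functional
# classes, which are different Python classes, so the same model fails there.
```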

DKMaCS commented 2 weeks ago

q1_model.keras was made beforehand from a Sequential model with common layers and saved with the save_model() function from tensorflow.keras.models.

pedrofrodenas commented 2 weeks ago

The problem is that you are using Keras 3. To avoid this problem, install the tf_keras library that matches your TensorFlow version, use `import tf_keras as keras`, and set `os.environ["TF_USE_LEGACY_KERAS"] = "1"`.
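A minimal sketch of that workaround (assuming a tf-keras release matching TensorFlow 2.17.0 is installed; a model saved under Keras 3 may need to be rebuilt or re-saved with tf_keras if the legacy loader cannot read it):

```python
# Hypothetical Colab cell illustrating the suggested tf_keras workaround.
# !pip install tensorflow_model_optimization tf-keras

import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"    # must be set before importing tensorflow

import tensorflow as tf
import tf_keras as keras                   # legacy Keras 2 implementation
import tensorflow_model_optimization as tfmot

# Load (or rebuild) the model with the legacy Keras so its classes match
# what quantize_model() expects.
model = keras.models.load_model('/content/drive/My Drive/CS528/HW3/q1_model.keras')

quantize_model = tfmot.quantization.keras.quantize_model
q_aware_model = quantize_model(model)
q_aware_model.summary()
```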

MEHDI342 commented 2 days ago

Instead of importing the layers and Sequential packages this way:

```python
from keras.layers import LSTM, Dense
from keras.models import Sequential
```

you could do it this way instead:

```python
from tensorflow.keras.layers import LSTM, Dense
```

and for Sequential you can just do:

```python
import keras
from keras import layers
from keras import ops
```

I think the Sequential import is just an artifact of GPT and Claude suggestions; the layers and ops packages are sufficient.
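A minimal sketch of that import style (the layer sizes and input shape here are made up for illustration): Sequential is reached through the top-level `keras` namespace, so no separate `from keras.models import Sequential` is needed.

```python
# Minimal sketch: build a Sequential model without importing Sequential directly.
import keras
from keras import layers

model = keras.Sequential([
    keras.Input(shape=(10, 8)),   # hypothetical (timesteps, features)
    layers.LSTM(32),
    layers.Dense(1),
])
model.summary()
```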