publioelon opened this issue 2 years ago
@Xhark could you take a look at this issue?
Any updates on this?
Any update on this?
Can you try applying quantization to the Dense layers only?
I successfully ran QAT on the model with the code below.

import tensorflow as tf
import tensorflow_model_optimization as tfmot

def apply_quantization_to_dense(layer):
    # Annotate only Dense layers for quantization; leave all other layers unchanged.
    if isinstance(layer, tf.keras.layers.Dense):
        return tfmot.quantization.keras.quantize_annotate_layer(layer)
    return layer

# `model` is your existing Keras model.
annotated_model = tf.keras.models.clone_model(
    model,
    clone_function=apply_quantization_to_dense,
)
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
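For completeness, here is a minimal sketch of how the resulting quant-aware model is typically fine-tuned, following the TFMOT QAT guide; the optimizer, loss, and the names train_images/train_labels are placeholders for your own setup.

# Quantization-aware fine-tuning; dataset and hyperparameters are placeholders.
quant_aware_model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
quant_aware_model.fit(train_images, train_labels,
                      batch_size=32, epochs=1, validation_split=0.1)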
I found a workaround for this problem, as I had the same issue. Basically, after loading the transfer learning model, e.g.
import tensorflow as tf

model_pre = tf.keras.applications.MobileNetV2(
    include_top=False,
    input_shape=(299, 299, 3),
    pooling='avg',
    weights='imagenet',
)

# Freeze the pretrained backbone for transfer learning.
for layer in model_pre.layers:
    layer.trainable = False
you need to convert it into a plain Keras functional Model:
# Re-wrap the pretrained network so its layers sit in a plain
# functional Model rather than a nested Keras-application model.
inputs = model_pre.input
outputs = model_pre.output
concatenated_model = tf.keras.Model(inputs=inputs, outputs=outputs)
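Since apply_quantization_to_dense below only annotates Dense layers, and a headless MobileNetV2 (include_top=False) contains none, you will typically want a Dense classification head on top of the frozen base first. A minimal sketch, where num_classes and the head sizes are placeholder assumptions:

# Hypothetical classification head on the frozen base; the Dense
# layers here are what apply_quantization_to_dense will annotate.
num_classes = 10  # assumption: replace with your number of classes
x = tf.keras.layers.Dense(128, activation='relu')(outputs)
predictions = tf.keras.layers.Dense(num_classes, activation='softmax')(x)
concatenated_model = tf.keras.Model(inputs=inputs, outputs=predictions)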
then you can use the method @mhyeonsoo used to add quantization annotations to the model. Note that applying it directly, without the previous step, will not quantize the layers inside the transfer learning model.
def apply_quantization_to_dense(layer):
    # Annotate only Dense layers; everything else passes through unchanged.
    if isinstance(layer, tf.keras.layers.Dense):
        return tfmot.quantization.keras.quantize_annotate_layer(layer)
    return layer

annotated_model = tf.keras.models.clone_model(
    concatenated_model,
    clone_function=apply_quantization_to_dense,
)
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
You can verify that your model is quantization-aware from the model summary.
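A minimal sketch of that check, assuming TFMOT's usual naming, where layers wrapped by quantize_apply get a quant_ prefix:

quant_aware_model.summary()
# Annotated layers appear wrapped in QuantizeWrapper with a quant_
# prefix (e.g. quant_dense), and a QuantizeLayer is added for the input.
quantized_names = [layer.name for layer in quant_aware_model.layers
                   if layer.name.startswith('quant')]
print(quantized_names)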
Hello, I have a MobileNetV2 that I am trying to use for image classification via transfer learning, but quantization apparently does not work. Initially, I perform transfer learning on my model as follows:
I then followed the steps here to perform model quantization: https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/quantization/training_example.ipynb#scrollTo=oq6blGjgFDCW
Following the guide above, I attempted quantization-aware training on my model like this:
It gives me the following error:
ValueError: Quantizing a tf.keras Model inside another tf.keras Model is not supported.
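The original snippet is not shown above, but based on the description, a minimal sketch of the kind of nesting that triggers this error follows; the Dense head size is an assumption.

import tensorflow as tf
import tensorflow_model_optimization as tfmot

base = tf.keras.applications.MobileNetV2(
    include_top=False, input_shape=(299, 299, 3),
    pooling='avg', weights='imagenet')

# The Keras-application network is itself a Model; putting it inside
# a Sequential nests one Model in another, which quantize_model rejects.
nested = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation='softmax'),  # assumed head
])
quant_aware_model = tfmot.quantization.keras.quantize_model(nested)
# -> ValueError: Quantizing a tf.keras Model inside another tf.keras
#    Model is not supported.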
Then I tried the suggestion from this link: https://github.com/tensorflow/model-optimization/issues/377#issuecomment-820948555
My attempt looked like this:
This outputs the following error:
ValueError: Unable to clone model. This generally happens if you used custom Keras layers or objects in your model. Please specify them via quantize_scope for your calls to quantize_model and quantize_apply.
It seems I haven't fully understood how to get quantization-aware training done correctly. I'd like to ask for help on how to properly do QAT on a transfer learning model such as the one above.
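For reference, the quantize_scope the error mentions is a context manager that registers custom objects before the model is cloned. A minimal sketch, assuming a hypothetical custom layer named CustomLayer standing in for whatever custom object your model actually uses:

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Hypothetical custom layer; your model's real custom classes go here.
class CustomLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        return inputs

# Registering the custom object lets clone/deserialize resolve it.
with tfmot.quantization.keras.quantize_scope({'CustomLayer': CustomLayer}):
    quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)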