scicafe / scicafe.github.io

scicafe blog
https://sci.cafe
GNU General Public License v3.0

blog/quantization-1 #15

Open suriyadeepan opened 4 years ago

suriyadeepan commented 4 years ago

Let's see if this shows up!

mipsan commented 3 years ago
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Input, Flatten, Dense, Dropout

basemodel = MobileNetV2(weights="imagenet", include_top=False,
                        input_tensor=Input(shape=(224, 224, 3)))

basemodel.trainable = False

#------------------
# Bounding-box regression head on top of MobileNetV2
flatten = basemodel.output
bboxHead = Flatten(name="flatten")(flatten)
bboxHead = Dense(128, activation="relu")(bboxHead)
bboxHead = Dropout(0.1)(bboxHead)
bboxHead = Dense(4, activation="sigmoid")(bboxHead)

Instead of the softmax used in create_model(), I use sigmoid to get the outputs in [0, 1]. However, when I execute quantize_model to quantize the model, it returns the following error:
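As a side note on why sigmoid is used here: it squashes each of the four box outputs independently into (0, 1), so they can be read as coordinates relative to image width and height. A minimal sketch of the activation itself in plain Python (no Keras needed; the logit values are made up for illustration):

```python
import math

def sigmoid(x):
    # Logistic function: maps any real number into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical raw logits for the 4 bbox outputs (x_min, y_min, x_max, y_max)
logits = [-2.0, 0.0, 1.5, 4.0]
coords = [sigmoid(v) for v in logits]

# Every output lies strictly between 0 and 1
assert all(0.0 < c < 1.0 for c in coords)
```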

ValueError: Only some Keras activations under tf.keras.activations are supported. For other activations, use Quantizer directly,
and update layer config using QuantizeConfig.

I cannot figure out how to resolve this at all. I also tried quantize_annotate_layer to quantize only the Dense layers, but that failed as well.

Any idea about this?

Thanks
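For background on what quantize_model is preparing the network for: int8 quantization replaces each float tensor with 8-bit integers plus a scale and zero-point, via real ≈ scale * (q - zero_point). A minimal sketch of that affine mapping in plain Python (illustrative only, not the tfmot implementation):

```python
def quantize_int8(values):
    # Asymmetric affine quantization to the int8 range [-128, 127].
    # The representable range must include 0.0 so that zero is exact.
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)
    scale = (hi - lo) / 255.0 or 1.0          # guard against all-zero input
    zero_point = round(-128 - lo / scale)      # integer that represents 0.0
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Map the integers back to approximate real values
    return [scale * (qi - zero_point) for qi in q]

vals = [-0.5, 0.0, 0.25, 1.0]
q, scale, zp = quantize_int8(vals)
approx = dequantize(q, scale, zp)
# Round-trip error is bounded by half a quantization step
assert all(abs(a - v) <= scale / 2 + 1e-9 for a, v in zip(approx, vals))
```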

mipsan commented 3 years ago

I've resolved this issue by modifying the model_optimization library.