google / qkeras

QKeras: a quantization deep learning library for Tensorflow Keras
Apache License 2.0

How to get quantized weights after model training? #94

Closed HumzaSami00 closed 2 years ago

HumzaSami00 commented 2 years ago

I am training a model using QKeras layers and quantizers and getting good accuracy on the test dataset. However, when I check the model weights with model.get_weights(), they are still not quantized to n bits. After digging through some closed issues, I learned that QKeras only quantizes weights in the forward pass, so I have to quantize them myself once training is complete. Currently, I am quantizing the weights after training using qkeras.utils.model_save_quantized_weights(model, "filename"). Is this the best approach to updating the model weights with their quantized values after training? I have attached sample code below.

import tensorflow as tf
import qkeras
from qkeras import QDense

# Define input layer
inp = tf.keras.layers.Input((100, 1))
# Define quantized dense layer (2-bit kernel and bias)
out = QDense(100, kernel_quantizer="quantized_bits(2)", bias_quantizer="quantized_bits(2)")(inp)
# Create model
model = tf.keras.Model(inp, out)

# -------- After training the model -------- #

# Update model weights with their 2-bit quantized values
qkeras.utils.model_save_quantized_weights(model, "weights")
weights = model.get_weights()
HumzaSami00 commented 2 years ago

@danielemoro @vloncar

vloncar commented 2 years ago

Hi @HumzaSami00, you can use that function to retrieve the quantized weights or to save them to a file. For example, to retrieve the quantized weights:

quantized_weights = qkeras.utils.model_save_quantized_weights(model)

or to save them to a file:

qkeras.utils.model_save_quantized_weights(model, "quantized_weights.h5")

Note that the object returned by model_save_quantized_weights has a different format from the list returned by model.get_weights().
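
A minimal sketch of inspecting the two formats side by side, continuing from the model defined above; the exact structure of the object returned by model_save_quantized_weights (assumed here to be a per-layer dictionary) may vary by QKeras version, so it is only printed here rather than relied upon:

import numpy as np
import qkeras

# model.get_weights() returns a flat list of float numpy arrays.
float_weights = model.get_weights()
print([w.shape for w in float_weights])

# model_save_quantized_weights(model) returns a per-layer structure
# (assumed here to be a dict keyed by layer name); inspect it before
# relying on the exact layout in your QKeras version.
quantized = qkeras.utils.model_save_quantized_weights(model)
for layer_name, entry in quantized.items():
    print(layer_name, type(entry))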

HumzaSami00 commented 2 years ago

Okay, got it. For now, I am using this approach to get the quantized weights after training. After training, I save the weights:

    qkeras.utils.model_save_quantized_weights(model, "quantized_weights.h5")

Then I reload the weights into the same model using:

    model.load_weights("quantized_weights.h5")
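
For reference, a minimal end-to-end sketch of this save-and-reload flow, continuing from the model defined above; the np.unique check at the end is only an illustrative way to confirm the loaded kernel takes a small set of discrete values, not part of the QKeras API:

import numpy as np
import qkeras

# Save the quantized weights to a Keras weights file
# (the returned per-layer structure can be kept if needed).
qkeras.utils.model_save_quantized_weights(model, "quantized_weights.h5")

# Reload the quantized weights into the same architecture.
model.load_weights("quantized_weights.h5")

# Illustrative check: a 2-bit quantizer should leave only a few
# distinct kernel values.
kernel = model.get_weights()[0]
print(np.unique(kernel))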