tensorflow / model-optimization

A toolkit for optimizing ML models for deployment with Keras and TensorFlow, including quantization and pruning.
https://www.tensorflow.org/model_optimization
Apache License 2.0

Average multiple quantized weights of different models #420

Closed AbbasiAYE closed 4 years ago

AbbasiAYE commented 4 years ago

System information

Motivation Enable algebraic operations on the quantized model; this would help in many applications, such as federated learning.

Describe the feature I'd like to be able to access the quantized layers' weights (i.e., 1) get, 2) operate on, 3) set them), as I do in the following code. This helps in checking the impact of quantization on the federation.

I was thinking to use: 1) get_weights_and_quantizers: this would be done for all models. 2) getAverageModel: this would combine the corresponding layers of the different models into one. 3) set_quantize_weights: this would replace the master model's weights with the averaged quantized ones.

Any suggestions would be very appreciated. Thanks
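The core of step 2, averaging the per-layer weight lists of several models, can be sketched in plain NumPy. This is a minimal illustration, not tied to the tfmot API; the toy weight lists below stand in for what a Keras model's `get_weights()` would return, and `average_weight_lists` is a hypothetical helper name:

```python
import numpy as np

def average_weight_lists(weight_lists):
    """Average corresponding layer arrays across several models.

    weight_lists: one list of per-layer NumPy arrays per model,
    with matching shapes across models (as from model.get_weights()).
    """
    # zip(*weight_lists) groups the i-th layer of every model together;
    # np.mean over axis=0 averages those arrays element-wise.
    return [np.mean(layer_group, axis=0)
            for layer_group in zip(*weight_lists)]

# Two toy "models", each with a 2x2 kernel and a length-2 bias.
model_a = [np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([0.0, 2.0])]
model_b = [np.array([[3.0, 4.0], [5.0, 6.0]]), np.array([2.0, 4.0])]

avg = average_weight_lists([model_a, model_b])
# avg[0] is the element-wise mean of the two kernels,
# avg[1] the mean of the two biases.
```

The same averaged list can then be pushed back into each model with `set_weights`, as the code below does.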

######### Code ###########

```python
import numpy as np

def getAverageModel(models):
    # Gather each model's list of per-layer weight arrays.
    weights = [model.get_weights() for model in models]

    new_weights = list()
    # For each layer position, average the corresponding arrays
    # across all models.
    for weights_list_tuple in zip(*weights):
        new_weights.append(
            [np.array(weights_).mean(axis=0)
             for weights_ in zip(*weights_list_tuple)])
    return new_weights

for e in range(nrofIterations):
    results = [trainModelPCA(models_FLPCA[client],
                             X_train_CLPCA[client], y_train_CLPCA[client],
                             X_test_CLPCA[client], y_test_CLPCA[client],
                             batchsize=batchSZ, epoch=Local_Epchs)
               for client in range(clients)]

    # If federated, we average the models_FLPCA and update each
    # client with the new weights.
    new_weights = getAverageModel(models_FLPCA)
    for i in range(clients):
        models_FLPCA[i].set_weights(new_weights)
```
nutsiepully commented 4 years ago

Hi @AbbasiAYE,

It's not clear to me what you want us to do in this case. As you've done in your code, you can apply an average, or any other such modification, to the weights in your model as you wish.

This doesn't seem like something QAT itself needs to support. Also, please consider using TF 2; we don't officially support TF 1.
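The get/modify/set round trip the maintainer refers to can be shown without QAT at all. This sketch uses a hypothetical `TinyModel` class that only mimics the Keras `get_weights()`/`set_weights()` interface, so the pattern is testable without TensorFlow:

```python
import numpy as np

class TinyModel:
    """Stand-in exposing the Keras get_weights()/set_weights() interface."""

    def __init__(self, weights):
        self._weights = [np.array(w, dtype=float) for w in weights]

    def get_weights(self):
        return [w.copy() for w in self._weights]

    def set_weights(self, weights):
        self._weights = [np.array(w, dtype=float) for w in weights]

# Two clients whose single layer holds different weights.
models = [TinyModel([[1.0, 3.0]]), TinyModel([[3.0, 5.0]])]

# Get: read every model's weights. Modify: average layer by layer.
avg = [np.mean(group, axis=0)
       for group in zip(*(m.get_weights() for m in models))]

# Set: push the averaged weights back into every client model.
for m in models:
    m.set_weights(avg)
```

With a real Keras model (quantization-aware or not), the same three calls apply unchanged; QAT adds no extra step here.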