tensorflow / model-optimization

A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
https://www.tensorflow.org/model_optimization
Apache License 2.0

How to extract original model's weight from trained QAT model? #977

Open doomooo opened 2 years ago

doomooo commented 2 years ago

System information

TensorFlow version (you are using): 2.8
Are you willing to contribute it (Yes/No):

Motivation

How to extract the original model's weight from the trained QAT model? I want to extract the original model's weight and QAT params, so I can set int8 params in TensorRT manually.
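For context on the TensorRT side of the question: once the per-tensor min/max values learned during QAT are known, the symmetric int8 scale that TensorRT expects can be derived from them. A minimal sketch, assuming the common symmetric-range convention (this helper is illustrative, not part of tfmot or TensorRT):

    def int8_symmetric_scale(min_val, max_val):
        # TensorRT int8 is symmetric: real_value ~= scale * int8_value,
        # with int8 in [-127, 127]. Use the larger absolute bound.
        amax = max(abs(min_val), abs(max_val))
        return amax / 127.0

The resulting scale (or equivalently the dynamic range `amax`) is what gets passed to TensorRT when setting int8 calibration parameters manually.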

doomooo commented 2 years ago

Hi, I wonder how to extract the original model's weight from the trained QAT model? @thaink

doomooo commented 2 years ago

Can anyone help take a look? Thanks! @thaink @rino20

thaink commented 2 years ago

Aren't the original model's weights in the checkpoint files or the variables/ directory?

doomooo commented 2 years ago

Thanks for your reply! @thaink No, the weights are inside the quantize wrapper, and their names are changed.
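To illustrate the renaming: as the code later in this thread shows, QAT wraps each layer and prefixes its name with `quant_`, so recovering the original layer name is a matter of stripping that prefix. A minimal sketch (this helper is hypothetical, not a tfmot API):

    def original_layer_name(qat_name):
        # QAT-wrapped layers are named 'quant_<original>'; strip the prefix.
        prefix = 'quant_'
        if qat_name.startswith(prefix):
            return qat_name[len(prefix):]
        return qat_name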

doomooo commented 2 years ago

I can extract the weights with some code or tricks, but I wonder if there is an official method. @thaink

thaink commented 2 years ago

I don't think we have an official way to do that. Use with TensorRT isn't among our supported use cases for now.

inho9606 commented 2 years ago

@Xhark could you review this thread? Thanks

WillLiGitHub commented 2 years ago

    import tensorflow as tf

    # model:   the original (float) Keras model
    # model_q: the quantization-aware-trained model
    for ly in model.layers:
        # QAT wraps each layer and prefixes its name with 'quant_'
        name = 'quant_' + ly.name
        if len(ly.variables) > 0:
            lly = model_q.get_layer(name)
            for var in ly.variables:
                for vvar in lly.variables:
                    if var.name == vvar.name:
                        # copy the trained weight back into the original model
                        var.assign(vvar.numpy())
                        break

doomooo commented 2 years ago

@WillLiGitHub Thanks! I have extracted weights in this way.