Tencent / PocketFlow

An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
https://pocketflow.github.io

Question on 'uniform_quantization_tf' #68

Closed wangxianrui closed 6 years ago

wangxianrui commented 6 years ago

In the source code of learners/uniform_quantization_tf/learner.py, TensorFlow's quantization-aware training is used to create the training graph:

tf.contrib.quantize.experimental_create_training_graph(
        weight_bits=FLAGS.uqtf_weight_bits,
        activation_bits=FLAGS.uqtf_activation_bits,
        quant_delay=FLAGS.uqtf_quant_delay,
        freeze_bn_delay=FLAGS.uqtf_freeze_bn_delay,
        scope=self.model_scope_quan)

but I cannot find where the value of 'input_graph' is set, so I am confused about how this can work. Maybe this is a stupid question; I am not familiar with TensorFlow.

jiaxiang-wu commented 6 years ago

If the input_graph argument is not specified, its default value (None) takes effect and the default graph is used. The default graph is defined in the code block starting from here: https://github.com/Tencent/PocketFlow/blob/master/learners/uniform_quantization_tf/learner.py#L142
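To illustrate the fallback behavior (not the actual TensorFlow source, just a minimal pure-Python sketch of the pattern): when `input_graph` is left as `None`, the rewriter calls the equivalent of `tf.get_default_graph()` and applies the quantization rewrite there. The `Graph`, `get_default_graph`, and `create_training_graph` names below are stand-ins, not the real API internals.

```python
# Hypothetical sketch of the "input_graph=None -> use the default graph"
# pattern used by tf.contrib.quantize.experimental_create_training_graph.

_default_graph = None  # process-wide default, like TF's default graph stack


class Graph:
    """Stand-in for tf.Graph."""
    def __init__(self, name):
        self.name = name
        self.rewritten = False


def get_default_graph():
    # Mirrors tf.get_default_graph(): lazily create and return the default.
    global _default_graph
    if _default_graph is None:
        _default_graph = Graph("default")
    return _default_graph


def create_training_graph(input_graph=None, weight_bits=8, activation_bits=8):
    # If the caller passes no graph, the rewrite is applied to the default
    # graph -- which is why PocketFlow's call works without input_graph.
    graph = input_graph if input_graph is not None else get_default_graph()
    graph.rewritten = True  # stand-in for inserting fake-quant ops
    return graph


g = create_training_graph()  # no input_graph supplied
assert g is get_default_graph()  # the default graph was used
assert g.rewritten
```

Since PocketFlow builds the model inside the default graph (the code block linked above), the rewrite lands on that model even though no graph is passed explicitly.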

wangxianrui commented 6 years ago

I just looked at 'quantize_graph.py' and understand it now. Sorry to bother you.