Tencent / PocketFlow

An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
https://pocketflow.github.io

mobilenetv2 faster rcnn uniform-tf #299


zyxcambridge commented 5 years ago

File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run _sys.exit(main(argv)) File "/opt/project/main.py", line 51, in main learner = create_learner(sm_writer, model_helper) File "/opt/project/learners/learner_utils.py", line 60, in create_learner learner = UniformQuantTFLearner(sm_writer, model_helper) File "/opt/project/learners/uniform_quantization_tf/learner.py", line 98, in init self.__build_train() File "/opt/project/learners/uniform_quantization_tf/learner.py", line 184, in __build_train scope=self.model_scope_quan) File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/quantize/python/quantize_graph.py", line 197, in experimental_create_training_graph scope=scope) File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/quantize/python/quantize_graph.py", line 70, in _create_graph is_training=is_training) File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/quantize/python/fold_batch_norms.py", line 53, in FoldBatchNorms graph, is_training, freeze_batch_norm_delay=freeze_batch_norm_delay) File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/quantize/python/fold_batch_norms.py", line 98, in _FoldFusedBatchNorms freeze_batch_norm_delay=freeze_batch_norm_delay)) File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/quantize/python/fold_batch_norms.py", line 338, in _ComputeBatchNormCorrections match.moving_variance_tensor + match.batch_epsilon) TypeError: unsupported operand type(s) for +: 'NoneType' and 'float'

Related TensorFlow issues reporting the same error can be found by searching the tracker for the failing expression:

https://github.com/tensorflow/tensorflow/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+match.moving_variance_tensor+%2B+match.batch_epsilon

Quantization is only supported for SSD models right now. Would replacing slim.batch_norm with tf.layers.batch_normalization() make this work?
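For anyone wanting to try that swap, an untested sketch of the two forms (the wrapper names and training flags are illustrative; whether the fold then succeeds is exactly what would need testing):

```python
import tensorflow as tf
slim = tf.contrib.slim

def bn_slim(net, is_training):
    # Current form in the slim-based model code: fused=True makes slim emit a
    # FusedBatchNorm op, the pattern tf.contrib.quantize expects to fold.
    return slim.batch_norm(net, fused=True, is_training=is_training)

def bn_layers(net, training):
    # Proposed form: tf.layers.batch_normalization also emits a FusedBatchNorm
    # node when fused=True. Its moving-average update ops land in
    # tf.GraphKeys.UPDATE_OPS and must be run alongside the train op.
    return tf.layers.batch_normalization(net, fused=True, training=training)
```

Both wrappers produce a FusedBatchNorm node when fused=True, so on paper the swap should not change what the folding pass sees; if the matcher already fails on the slim-built graph, the tf.layers variant may fail the same way.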