If you look at this URL, when you use tf.layers.batch_normalization you have to handle update_ops yourself (add a control dependency on tf.GraphKeys.UPDATE_OPS before the train op).
However, I ran into the following issues (only in my case, not for everyone).
So I used batch_norm from contrib instead. If you set updates_collections=None, the code is much simpler and easier to understand. Performance also improved, and passing None has the same effect as the control_dependencies wrapper above, because the moving-average updates are executed in place.
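Not my exact code, just a minimal TF 1.x sketch of the two styles being compared (x, loss, and is_training are hypothetical placeholders):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 32, 32, 64])
is_training = tf.placeholder(tf.bool)

# Style 1: tf.layers.batch_normalization -- the moving mean/variance updates
# are collected in tf.GraphKeys.UPDATE_OPS, so the train op must be wrapped
# in a control dependency, otherwise the statistics are never updated.
h = tf.layers.batch_normalization(x, training=is_training)
loss = tf.reduce_mean(tf.square(h))
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

# Style 2: tf.contrib.layers.batch_norm with updates_collections=None -- the
# updates run in place as part of the forward pass, so no wrapper is needed.
h2 = tf.contrib.layers.batch_norm(x, is_training=is_training,
                                  updates_collections=None)
```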
thank you
Oh wow! Many thanks for your detailed explanation! You are so friendly~ I'm surprised that performance is affected! Thanks for sharing anyway.
BTW, for the Global_Average_Pooling I use tf.reduce_mean(input_tensor=x, axis=[1, 2], keep_dims=True). What do you think~
Oh, good idea. Using reduce_mean over the spatial axes like that is exactly global average pooling.
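For reference, a minimal sketch (TF 1.x, with a hypothetical 7x7x256 feature map) of that reduce_mean global average pooling:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 7, 7, 256])  # NHWC feature map

# Global average pooling: average over the height and width axes.
# keep_dims=True preserves a [N, 1, 1, C] shape (named keepdims in newer releases).
gap = tf.reduce_mean(x, axis=[1, 2], keep_dims=True)  # shape [None, 1, 1, 256]
```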
Wow, excellent work! Thanks for sharing~ Just a quick question: is there any reason why you don't use the batch_normalization in tf.layers (https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization)? I'd be glad if you replied~ Thank you!