str4w opened this issue 6 years ago
Same here. It must be a rendering issue, because I see the same thing (the first batch norm layer connected to all other batch norm layers) on totally different networks built a long time ago. Also, in models trained without batch normalization, the first dropout layer instead appears to be connected to all other dropout layers.
Just encountered the same problem today, and after some digging I think I have found the true root of this behaviour.
As far as I understand it, the layers are not per se wrongfully linked; the extra edges are a side effect of Keras internals rather than of the model itself.
Keras uses BatchNormalization (and perhaps Dropout) layer implementations that depend on the learning_phase flag
(a boolean value), because these layers need to behave differently while fitting and while evaluating. It looks like this flag is stored as an input on the first layer that uses it, and even if we set it to zero manually (e.g., by calling keras.backend.set_learning_phase(0)), it is still used for some initial flag calculations that are propagated from there to all other layers with a similar internal structure.
You can see this just by inspecting the structure of a batch norm layer in the graph.
The real question is: how do we disable that flag propagation? Is it at least possible to make it reducible by optimization (i.e., when learning_phase is manually set to zero, the input to the keras_learning_phase node is static, so the flags that are propagated could be treated as static as well)?
Or, to formulate the question in terms of this repo: is it possible for TensorBoard to automatically hide these Keras-specific autogenerated links, which are unrelated to the actual logic of the graph?
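To make the mechanism concrete, here is a minimal sketch of my own (assuming TF 1.x-style tf.keras; the shapes and layers are made up for illustration):

import tensorflow as tf

# Pin the learning phase to 0 ("inference") before any layer is built, so
# layers that branch on it (BatchNormalization, Dropout) see a constant
# instead of a runtime flag.
tf.keras.backend.set_learning_phase(0)

inp = tf.keras.layers.Input((32, 32, 3))
x = tf.keras.layers.BatchNormalization()(inp)
x = tf.keras.layers.Dropout(0.5)(x)
model = tf.keras.models.Model(inp, x)

# Even with the flag pinned, the shared keras_learning_phase input is created
# once and fed to every such layer, which is what TensorBoard renders as the
# first batch norm / dropout layer being connected to all the others.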
I think I have the same issue here
I think I had the same problem when converting a TensorFlow model to a Keras model.
I also have the problem that using Keras BatchNormalization produces 4 nodes in the Functions graph in TensorBoard for each BatchNormalization layer. A Keras model with just a few BatchNormalization layers makes the TensorBoard Graphs dashboard extremely slow and buggy; I suspect the rendering takes too long because of all the Functions nodes.
It would be nice if the Functions graph were hidden by default.
For example, this:
import numpy as np
import tensorflow as tf

# Minimal model with a single BatchNormalization layer.
input_shape = (128, 128, 1)
inp = tf.keras.layers.Input(input_shape)
bn = tf.keras.layers.BatchNormalization()
out = bn(inp)
model = tf.keras.models.Model(inputs=inp, outputs=out)
model.compile(optimizer="Adam", loss="mse")

# Dummy batch matching the model's input and output shapes.
output_shape = out.shape
feature_batch = np.zeros([1] + list(input_shape))
label_batch = np.zeros([1] + list(output_shape)[1:])

# Log the graph via the TensorBoard callback.
log_dir = "/tmp/tensorboard_model/"
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir)
model.fit(feature_batch, label_batch, callbacks=[tensorboard_callback])
results in the graph shown in the attached screenshot (not reproduced here).
When using tensorflow.keras to create a model, the first BatchNormalization layer appears to be connected to all other batch normalization layers in the graph. I think this is rendered incorrectly rather than built incorrectly, but I have not been able to prove that.
Code follows that builds the same model with pure TensorFlow and with tensorflow.keras, as well as the graph rendered by TensorBoard in each case.
This issue is probably related to this unanswered StackOverflow post: https://stackoverflow.com/questions/52586853/batchnormalization-nodes-wrongfully-linked-with-each-other and possibly related to this tensorflow issue: https://github.com/tensorflow/tensorflow/issues/17985
Graph produced by pure TensorFlow
Graph produced with the Keras model
TensorFlow code
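(The original snippet is not reproduced here; what follows is only a sketch of what a pure TensorFlow, graph-mode version of a small two-BatchNorm model might look like, with made-up layer sizes, names, and paths.)

import tensorflow as tf

# Graph-mode (TF 1.x-style) construction: two conv + batch norm blocks, with
# training hard-coded to False, and the graph written out for TensorBoard.
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, [None, 128, 128, 1], name="input")
h = tf.compat.v1.layers.conv2d(x, 8, 3, padding="same", name="conv1")
h = tf.compat.v1.layers.batch_normalization(h, training=False, name="bn1")
h = tf.compat.v1.layers.conv2d(h, 1, 3, padding="same", name="conv2")
y = tf.compat.v1.layers.batch_normalization(h, training=False, name="bn2")

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    # Write only the graph so it can be inspected in TensorBoard.
    writer = tf.compat.v1.summary.FileWriter("/tmp/tensorboard_pure_tf", sess.graph)
    writer.close()

Because the training flag is a plain Python constant here, only the inference ops are built and nothing in the graph links the two batch norm ops to each other.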
Keras equivalent
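(Again, the original attachment is not included; the tf.keras equivalent is essentially the snippet posted earlier in the thread, roughly like this, with two BatchNormalization layers so the spurious link is visible.)

import numpy as np
import tensorflow as tf

# tf.keras construction of the same two-BatchNorm model; fitting with the
# TensorBoard callback exports the Keras-generated graph.
inp = tf.keras.layers.Input((128, 128, 1))
x = tf.keras.layers.Conv2D(8, 3, padding="same")(inp)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(1, 3, padding="same")(x)
out = tf.keras.layers.BatchNormalization()(x)
model = tf.keras.models.Model(inputs=inp, outputs=out)
model.compile(optimizer="Adam", loss="mse")

features = np.zeros((1, 128, 128, 1))
labels = np.zeros((1, 128, 128, 1))
callback = tf.keras.callbacks.TensorBoard("/tmp/tensorboard_keras")
model.fit(features, labels, callbacks=[callback])

In the resulting graph, the learning-phase input is attached to the first BatchNormalization layer and fans out to the second, which is the spurious-looking link this issue is about.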
pinging @nuance-research