The last TensorFlow upgrade appears to have broken the training procedure. With the default configuration, training hangs after the first 10 epochs, exactly at the boundary where it should continue for the next 10 epochs with a different learning rate.
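For reference, the schedule described above (a new learning rate every 10 epochs) can be sketched as a plain step function. The function name, base rate, and decay factor here are illustrative assumptions, not the project's actual values:

```python
import math

def stepped_lr(epoch, base_lr=0.01, drop=0.5, epochs_per_step=10):
    """Hypothetical stepwise schedule: multiply the learning rate
    by `drop` once every `epochs_per_step` epochs.

    The values of base_lr and drop are placeholders; the real
    configuration may differ.
    """
    return base_lr * (drop ** (epoch // epochs_per_step))

# Epochs 0-9 use base_lr; epoch 10 is the first boundary where
# training should switch to the next rate instead of hanging.
```

The hang occurs precisely at the first boundary of such a schedule (epoch 10), which suggests the epoch-boundary callback or schedule update introduced by the upgrade is where to look first.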