Closed — larrydemons closed this issue 5 years ago
I think you mean the program seems to stop doing anything after one epoch has finished. It is most likely spending that time inside CraftTensorBoard ==> on_epoch_end. You can check which lines there take the most time. In my experience, you can comment out every line in "on_epoch_end" except "self.test_model.save_weights(r'weights/***.h5'.format(epoch))" and it will work. : )
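To illustrate the suggestion above, here is a minimal sketch of a slimmed-down callback whose `on_epoch_end` keeps only the weight-saving call and skips the slow logging/evaluation work. All class, method, and path names besides `on_epoch_end`/`save_weights` are hypothetical (the real class in the repo is `CraftTensorBoard`), and TensorFlow is replaced by a stand-in so the snippet runs standalone:

```python
class Callback:
    """Stand-in for keras.callbacks.Callback (assumption: same hook name)."""
    def on_epoch_end(self, epoch, logs=None):
        pass


class SlimTensorBoard(Callback):
    """Hypothetical slimmed version of the repo's CraftTensorBoard."""
    def __init__(self, test_model):
        self.test_model = test_model
        self.saved = []

    def on_epoch_end(self, epoch, logs=None):
        # Slow steps (validation runs, image summaries, etc.) are the
        # lines the comment above suggests commenting out:
        # self.run_validation(epoch)          # <- hypothetical slow step, skipped
        # self.write_image_summaries(epoch)   # <- hypothetical slow step, skipped

        # Only the checkpoint survives (path is illustrative, not the
        # repo's actual 'weights/***.h5' pattern):
        path = 'weights/epoch_{}.h5'.format(epoch)
        self.test_model.save_weights(path)
        self.saved.append(path)


class DummyModel:
    """Stand-in for the Keras model; real save_weights writes an HDF5 file."""
    def save_weights(self, path):
        print('saved', path)


cb = SlimTensorBoard(DummyModel())
cb.on_epoch_end(0)  # prints: saved weights/epoch_0.h5
```

The point is that the epoch-end hook itself is cheap; everything expensive that runs between epochs lives in the lines you comment out.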
it works thx
@larrydemons Hello, I would like to know how you solved this epoch problem.
Total params: 23,126,736
Trainable params: 23,123,856
Non-trainable params: 2,880

line588 ['/job:localhost/replica:0/task:0/device:GPU:0']
Epoch 1/800
1000/1000 [==============================] - 1143s 1s/step - loss: 0.0176
Has anyone run into this problem?