...
iteration 97, cost -5.999
iteration 98, cost -6.013
iteration 99, cost -6.013
Writting network output of training set to neuralnets/trainout_nnet_save__convnet_iter100.hdf...
Saving the trained net to neuralnets/nnet_save__convnet_iter100.hdf...
Done
Total time: 2589.5 s
real 43m12.468s
user 34m6.747s
sys 12m38.237s
When using model.fit, training is much faster, and I get almost the same results (similar loss, similar trained network):
Epoch 97/100
392/392 [==============================] - 4s 10ms/step - loss: -6.0035
Epoch 98/100
392/392 [==============================] - 4s 10ms/step - loss: -6.0049
Epoch 99/100
392/392 [==============================] - 4s 10ms/step - loss: -5.9938
Epoch 100/100
392/392 [==============================] - 4s 10ms/step - loss: -6.0119
Writting network output of training set to neuralnets/trainout_nnet_save__convnet_iter100.hdf...
Saving the trained net to neuralnets/nnet_save__convnet_iter100.hdf...
Done
Total time: 450.5 s
real 7m32.213s
user 5m22.877s
sys 1m22.681s
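Why the results come out nearly identical can be sketched with a toy model (pure Python, not Keras; the ToyModel class and its loss are made up for illustration): a hand-written train_on_batch loop and a fit() that runs the same loop internally compute identical parameter updates, and fit just keeps the per-batch bookkeeping inside the framework, which is where the time is saved.

```python
# Toy sketch (hypothetical ToyModel, not Keras): a manual per-batch loop
# and a single fit() call perform the same sequence of SGD steps.
class ToyModel:
    def __init__(self):
        self.w = 0.0  # single parameter, pulled toward the mean label

    def train_on_batch(self, x_batch, y_batch, lr=0.1):
        """One SGD step on the squared error between w and mean(y)."""
        target = sum(y_batch) / len(y_batch)
        self.w -= lr * 2.0 * (self.w - target)
        return (self.w - target) ** 2  # loss after the step

    def fit(self, x, y, batch_size, epochs, lr=0.1):
        """The same per-batch loop, driven internally."""
        for _ in range(epochs):
            for i in range(0, len(y), batch_size):
                self.train_on_batch(x[i:i + batch_size],
                                    y[i:i + batch_size], lr)

x = list(range(64))
y = [1.0] * 64

manual = ToyModel()
for _ in range(5):                       # 5 epochs by hand
    for i in range(0, len(y), 16):
        manual.train_on_batch(x[i:i + 16], y[i:i + 16])

auto = ToyModel()
auto.fit(x, y, batch_size=16, epochs=5)  # same schedule, one call

print(manual.w == auto.w)  # True: identical final weights
```

In real Keras the speedup is larger than this toy suggests, because fit additionally batches the graph execution instead of crossing the Python/backend boundary once per step.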
I can even use the callbacks parameter of model.fit() to set up a learning-rate scheduler and improve the training process further. So I think it is better to use model.fit instead of a manual loop over model.train_on_batch.
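For example, a schedule can be written as a plain function and handed to model.fit through keras.callbacks.LearningRateScheduler. The decay constants below are arbitrary, and the Keras wiring is left as a commented sketch since it needs TensorFlow and a built model:

```python
# Step-decay schedule of the kind LearningRateScheduler accepts: it is
# called as schedule(epoch, lr) and the returned value is used that epoch.
def step_decay(epoch, lr=None, initial_lr=1e-3, drop=0.5, epochs_per_drop=30):
    """Halve the learning rate every `epochs_per_drop` epochs."""
    return initial_lr * drop ** (epoch // epochs_per_drop)

# Keras wiring (sketch; train_x/train_y and model are assumed to exist):
# from tensorflow.keras.callbacks import LearningRateScheduler
# model.fit(train_x, train_y, epochs=100,
#           callbacks=[LearningRateScheduler(step_decay)])

print(step_decay(0), step_decay(30), step_decay(99))
```

This kind of per-epoch control is awkward to bolt onto a hand-written train_on_batch loop but comes for free with fit.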
In summary: e2tomoseg_convnet.py could be improved by using model.fit. In the runs above, training was roughly six times faster (450.5 s vs. 2589.5 s total).