wjj001-dl opened 4 years ago
@wjj001-dl Because of overfitting. https://www.tensorflow.org/tutorials/keras/overfit_and_underfit
Hi @tks10! Thank you very much for sharing your wonderful work; it has helped me a lot to understand the U-Net implementation when applying it to my own data. However, I still have a lot to learn, as I am quite new to Python.
Regarding @wjj001-dl's question, I had also noticed the overfitting problem and was looking for a way to use Keras's EarlyStopping to stop training automatically when the test loss reaches its minimum and to keep that exact model as the best one.
The point that is very unclear to me is at line 43 of your code (main.py):

```python
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_step = tf.train.AdamOptimizer(0.001).minimize(cross_entropy)
```
combined with line 73:

```python
sess.run(train_step, feed_dict={model_unet.inputs: inputs, model_unet.teacher: teacher,
                                model_unet.is_training: True})
```
As I have seen in other implementations, and as the link you posted above also shows, there are `compile` and `fit` steps in training that make it possible to include EarlyStopping or other callbacks in the training.
However, when working with raw tensors things become really complicated for me. So I was wondering whether there is a way to insert EarlyStopping into the `train_step` that is later called at line 73 and carries out the training procedure. I have been trying for two weeks to understand and apply this change, but in vain.
I would be grateful for any possible help. Thank you very much in advance!
P.S. One more quick question, unrelated to the previous one: at line 20 you create the validation set as part of the training set. So the training and validation sets share images, i.e. every image in the validation set is also in the training set?
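On that last point, the usual practice is to make the two sets disjoint, since validating on images the model has already trained on hides overfitting. A minimal sketch of a disjoint split (the file names are placeholders, not from the repository):

```python
import random

def disjoint_split(items, val_ratio=0.2, seed=0):
    """Shuffle a copy once, then slice, so train and validation
    sets share no items."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_ratio))
    return shuffled[n_val:], shuffled[:n_val]

paths = [f"img_{i:03d}.png" for i in range(100)]  # placeholder file names
train, val = disjoint_split(paths, val_ratio=0.2)
assert not set(train) & set(val)                  # no overlap
assert len(train) + len(val) == len(paths)        # nothing lost
```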
Hello, why is the test set loss rising?