Closed vivek-sethia closed 6 years ago
Not sure if it's the right way, but maybe change line 71 in train1.py to
tf.train.Saver().save(sess, '{}/epoch_latest_step_latest'.format(logdir))
so it will save only the latest model (overwriting it each time).
Thanks for the reply. I solved it by creating the saver outside the loop like this:
saver = tf.train.Saver(max_to_keep=2)
and then calling it inside the loop like this:
saver.save(sess, '{}/epoch_{}_step_{}'.format(logdir2, epoch, gs))
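For context, `max_to_keep=2` tells the `Saver` to delete all but the two most recent checkpoints as training proceeds, which is what caps the disk usage. A minimal pure-Python sketch of that rotation behavior (just an illustration of what `max_to_keep` does, not TensorFlow itself; the file contents and directory here are dummies):

```python
import os
import tempfile

def save_checkpoint(logdir, epoch, gs, kept, max_to_keep=2):
    """Write a dummy checkpoint file and prune the oldest ones,
    mimicking tf.train.Saver(max_to_keep=2)."""
    path = '{}/epoch_{}_step_{}'.format(logdir, epoch, gs)
    with open(path, 'w') as f:
        f.write('checkpoint data')
    kept.append(path)
    # Drop the oldest checkpoints once we exceed the limit.
    while len(kept) > max_to_keep:
        os.remove(kept.pop(0))

logdir = tempfile.mkdtemp()
kept = []
for epoch in range(5):
    save_checkpoint(logdir, epoch, epoch * 100, kept)

# Only the two most recent checkpoints remain on disk.
print(sorted(os.listdir(logdir)))  # → ['epoch_3_step_300', 'epoch_4_step_400']
```

Without the pruning (or with a fresh `Saver` created per iteration, which resets the bookkeeping), every epoch's checkpoint stays on disk, which is how a logdir balloons to tens of gigabytes.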
That's probably better. Have you managed to run train2?
@0i0 I have managed to run train2, but only with 500 epochs, and the results are not as good as expected. What about you?
Same here: 2000 epochs, but with a different dataset.
I am trying to run the two networks, and when I ran the two scripts, the logdir folder used almost 40 GB of disk space. I had to terminate the run since I did not have much space left on my AWS instance. Any ideas on how to tackle this?
I have reduced the number of epochs to 500 for both scripts.