SugarShine opened this issue 6 years ago
Yes, there is a memory leak. I'm actually working on a new version, but it's not quite ready yet; in the meantime you can try replacing the above code with your own normalization function.
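For reference, here is a minimal sketch of such a replacement, assuming the batches arrive as NumPy arrays of shape (N, H, W, C); it mirrors what tf.image.per_image_standardization computes, but runs in NumPy so no new ops are ever added to the graph:

```python
import numpy as np

def standardize_batch(batch):
    """Per-image standardization in NumPy: subtract each image's mean and
    divide by its adjusted standard deviation, matching the behaviour of
    tf.image.per_image_standardization."""
    batch = batch.astype(np.float32)
    out = np.empty_like(batch)
    for i, img in enumerate(batch):
        mean = img.mean()
        # adjusted_stddev = max(stddev, 1 / sqrt(num_elements)), to avoid
        # division by zero on uniform images
        adjusted_std = max(img.std(), 1.0 / np.sqrt(img.size))
        out[i] = (img - mean) / adjusted_std
    return out

# training_batch = standardize_batch(training_batch)
# groundtruth_batch = standardize_batch(groundtruth_batch)
```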
@manumathewthomas Thanks so much for your reply; I have solved the memory leak issue. But when I train the model, the loss is always NaN. Could you give me some advice, or share the new version?
try reducing the learning rate
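In case it helps, a rough sketch of the usual guards against NaN losses in a TF1 training script is below; the function and argument names are illustrative, since the actual training code isn't shown in this thread:

```python
import tensorflow as tf

def build_train_op(loss, learning_rate=1e-5, clip_norm=5.0):
    """Optimizer setup with a lowered learning rate and gradient clipping,
    two common guards against the loss blowing up to NaN."""
    optimizer = tf.train.AdamOptimizer(learning_rate)
    grads_and_vars = optimizer.compute_gradients(loss)
    # Clip each gradient so a single bad batch cannot destroy the weights.
    clipped = [(tf.clip_by_norm(g, clip_norm), v)
               for g, v in grads_and_vars if g is not None]
    return optimizer.apply_gradients(clipped)

# If the loss contains a log or a division, guard it with a small epsilon, e.g.:
# loss = -tf.reduce_mean(tf.log(predictions + 1e-8))
```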
How did you solve this problem? Can you share your fix?
I also get loss = NaN when training on my dataset, even after reducing the learning rate. Can you give any advice?
Thanks for sharing your code. When I train the model on my own dataset, I found a memory leak when running this code:

    training_batch = sess.run(tf.map_fn(lambda img: tf.image.per_image_standardization(img), training_batch))
    groundtruth_batch = sess.run(tf.map_fn(lambda img: tf.image.per_image_standardization(img), groundtruth_batch))

After some iterations, when saving checkpoints, the GraphDef grows larger than 2GB and the program crashes. Has anyone run into this issue, and how do you solve it?
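For what it's worth, the GraphDef keeps growing because each sess.run(tf.map_fn(...)) call inside the training loop adds new nodes to the graph. A minimal sketch of the alternative, building the standardization op once and feeding batches through a placeholder, follows; the shapes and dummy data are illustrative only:

```python
import numpy as np
import tensorflow as tf

# Build the standardization op ONCE, outside the training loop, so no new
# graph nodes are created per iteration and the GraphDef stays small.
raw_batch = tf.placeholder(tf.float32, shape=[None, 64, 64, 3])  # hypothetical patch size
standardized_batch = tf.map_fn(tf.image.per_image_standardization, raw_batch)

with tf.Session() as sess:
    # Dummy arrays standing in for the real noisy / ground-truth batches.
    raw_training_batch = np.random.rand(4, 64, 64, 3).astype(np.float32)
    raw_groundtruth_batch = np.random.rand(4, 64, 64, 3).astype(np.float32)

    for step in range(100):  # your training loop
        # Reuse the same op every iteration instead of calling
        # sess.run(tf.map_fn(...)), which would keep enlarging the graph.
        training_batch = sess.run(standardized_batch,
                                  feed_dict={raw_batch: raw_training_batch})
        groundtruth_batch = sess.run(standardized_batch,
                                     feed_dict={raw_batch: raw_groundtruth_batch})
```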
Hi, when I train the model on the CPU, I encountered the same problem, "GraphDef cannot be larger than 2GB." How did you solve it?
And how do you replace the dataset with your own? I don't know how to process the data.
Thank you very much. I am a beginner and there are many things I don't understand; I hope I'm not disturbing you. If you're Chinese, can I add you on WeChat?
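On swapping in your own dataset: a rough sketch of loading paired noisy / ground-truth images into NumPy arrays is below. The directory names, file pattern, and patch size are all assumptions, since the repo's actual data pipeline isn't shown in this thread:

```python
import glob
import os
import numpy as np
from PIL import Image

# Hypothetical layout: each noisy image has a ground-truth image with the
# same filename in a parallel directory. Adjust paths and size as needed.
NOISY_DIR = "data/train/noisy"
CLEAN_DIR = "data/train/clean"
PATCH_SIZE = 64  # assumed crop size; match whatever the model expects

def load_pairs(noisy_dir=NOISY_DIR, clean_dir=CLEAN_DIR, size=PATCH_SIZE):
    noisy, clean = [], []
    for path in sorted(glob.glob(os.path.join(noisy_dir, "*.png"))):
        name = os.path.basename(path)
        clean_path = os.path.join(clean_dir, name)
        if not os.path.exists(clean_path):
            continue  # skip images without a matching ground truth
        noisy.append(np.asarray(Image.open(path).convert("RGB").resize((size, size)),
                                dtype=np.float32))
        clean.append(np.asarray(Image.open(clean_path).convert("RGB").resize((size, size)),
                                dtype=np.float32))
    return np.stack(noisy), np.stack(clean)

# training_images, groundtruth_images = load_pairs()
```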