Lines 170-174 in train_model.py may cause high memory consumption during training,
because the assign operators are created inside the for loop, so TensorFlow adds new nodes to the graph on every epoch.
It's better to define the assign operators outside the for loop and use 'sess.run' to update the loss variables each epoch. Of course, this requires tf.placeholders for the losses.
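A minimal sketch of that fix, assuming TensorFlow 1.x; the variable names (train_loss_var, val_loss_var, num_epochs) are hypothetical stand-ins for the actual ones in train_model.py:

```python
import tensorflow as tf

num_epochs = 10  # hypothetical; use the script's real epoch count

# Hypothetical loss variables standing in for those in train_model.py.
train_loss_var = tf.Variable(0.0, trainable=False, name="train_loss")
val_loss_var = tf.Variable(0.0, trainable=False, name="val_loss")

# Placeholders and assign ops are built ONCE, outside the loop,
# so the graph does not grow on every epoch.
train_loss_ph = tf.placeholder(tf.float32, shape=[], name="train_loss_ph")
val_loss_ph = tf.placeholder(tf.float32, shape=[], name="val_loss_ph")
assign_losses = [tf.assign(train_loss_var, train_loss_ph),
                 tf.assign(val_loss_var, val_loss_ph)]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(num_epochs):
        # In the real script these would come from the training and
        # validation steps; dummy values keep the sketch runnable.
        epoch_train_loss, epoch_val_loss = 1.0 / (epoch + 1), 2.0 / (epoch + 1)
        # Only feeds values into existing ops; no new graph nodes created.
        sess.run(assign_losses,
                 feed_dict={train_loss_ph: epoch_train_loss,
                            val_loss_ph: epoch_val_loss})
```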
There may be an even better solution: replace the corresponding tf.Variables with tf.placeholders, so no assign operators are needed at all and 'sess.run' can fetch the merged result directly. I haven't tried this method yet.
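A sketch of that alternative, assuming the "merged result" refers to tf.summary.merge_all() and that the losses feed scalar summaries; again, all names here are hypothetical:

```python
import tensorflow as tf

num_epochs = 10  # hypothetical

# The losses are plain placeholders, so no tf.Variables and no assign
# operators are needed; values are fed in when fetching the merged summary.
train_loss_ph = tf.placeholder(tf.float32, shape=[], name="train_loss")
val_loss_ph = tf.placeholder(tf.float32, shape=[], name="val_loss")
tf.summary.scalar("train_loss", train_loss_ph)
tf.summary.scalar("val_loss", val_loss_ph)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter("logs", sess.graph)
    for epoch in range(num_epochs):
        epoch_train_loss, epoch_val_loss = 1.0 / (epoch + 1), 2.0 / (epoch + 1)
        # Fetch the merged summary directly; the graph stays fixed.
        summary = sess.run(merged,
                           feed_dict={train_loss_ph: epoch_train_loss,
                                      val_loss_ph: epoch_val_loss})
        writer.add_summary(summary, epoch)
    writer.close()
```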