guoqiangqi / PFLD

Implementation of "PFLD: A Practical Facial Landmark Detector", reference: https://arxiv.org/pdf/1902.10859.pdf

memory consumption #70

Closed sljlp closed 4 years ago

sljlp commented 4 years ago

Lines 170-174 in train_model.py may cause high memory consumption during training, because the assign operators are inside the for loop and TF creates new nodes in the graph on every epoch. It would be better to define the assign operators outside the for loop and use sess.run to assign the loss variables each epoch; of course, tf.placeholders for the losses are needed then. I suspect there is an even better solution: replace the corresponding tf.Variables with tf.placeholders, so no assign operators are needed at all, just a sess.run to get the merged result. I haven't tried that method yet.
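
A minimal sketch of the first suggestion, assuming TF1-style code; the variable names (train_loss_var, test_loss_var, num_epochs) are hypothetical stand-ins for whatever train_model.py actually uses. The key point is that the placeholders and assign ops are built once, before the loop, so the graph stops growing:

```python
import tensorflow as tf

# Hypothetical loss variables, standing in for the ones in train_model.py.
train_loss_var = tf.Variable(0.0, trainable=False, name="train_loss")
test_loss_var = tf.Variable(0.0, trainable=False, name="test_loss")

# Build placeholders and assign ops ONCE, outside the training loop,
# so no new graph nodes are added per epoch.
train_loss_ph = tf.placeholder(tf.float32, shape=[], name="train_loss_ph")
test_loss_ph = tf.placeholder(tf.float32, shape=[], name="test_loss_ph")
assign_train = tf.assign(train_loss_var, train_loss_ph)
assign_test = tf.assign(test_loss_var, test_loss_ph)

num_epochs = 100  # hypothetical

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(num_epochs):
        # ... run the actual training step and compute the epoch losses ...
        train_loss, test_loss = 0.1, 0.2  # placeholder values for the sketch

        # Reuse the pre-built assign ops; only the fed values change per epoch.
        sess.run([assign_train, assign_test],
                 feed_dict={train_loss_ph: train_loss,
                            test_loss_ph: test_loss})
```

The second suggestion (dropping the tf.Variables entirely and feeding the losses through tf.placeholders into the summary/merge op) would avoid even the assign ops, but as noted above it is untested here.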