Open pikaplan opened 7 years ago
You can load/save the model at every loop iteration to keep the weights. Also note that DNN has its own session, so you do not need to run
sess.run(tf.initialize_all_variables())
This is also not needed, because you have already encapsulated your graph:
tf.reset_default_graph()
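The save/load-per-loop idea above can be sketched in miniature. This uses NumPy arrays as a stand-in for the model's weights; in TFLearn the equivalent calls would be model.save(path) and model.load(path). All names and paths here are illustrative, not the thread author's code.

```python
# Sketch of saving weights each fold and restoring them later, with
# NumPy arrays standing in for the model's weights (illustrative only).
import os
import tempfile

import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4))        # stand-in for model weights

ckpt = os.path.join(tempfile.mkdtemp(), "fold.npy")
np.save(ckpt, weights)                       # analogous to model.save(...)

weights += 1.0                               # "training" mutates the weights
restored = np.load(ckpt)                     # analogous to model.load(...)
print(np.allclose(restored, weights - 1.0))  # True: saved state was kept
```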
I was running into the same problem. What worked for me was keeping the code clean, avoiding reallocating data subsets into different variables inside the loop, and deleting variables and freeing memory at the end of each k-fold iteration, something like this:
# in imports
import gc

# at the end of each fold's loop body
model.save(..)
del fold
del X
del Y
del TX
del TY
gc.collect()
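A runnable sketch of that cleanup pattern, under illustrative assumptions (a toy NumPy dataset and a commented-out stand-in for the actual training call): the point is deleting the per-fold arrays and forcing a garbage-collection pass at the end of every iteration.

```python
# 10-fold loop that drops the per-fold split arrays and collects
# garbage before the next iteration; data and splits are toy values.
import gc

import numpy as np

data = np.arange(100).reshape(50, 2)        # toy dataset, 50 samples
labels = np.arange(50) % 8                  # 8 toy classes
folds = np.array_split(np.arange(50), 10)   # 10 folds of sample indices

for k, test_idx in enumerate(folds):
    train_idx = np.setdiff1d(np.arange(50), test_idx)
    X, Y = data[train_idx], labels[train_idx]    # training split
    TX, TY = data[test_idx], labels[test_idx]    # test split
    # model.fit(X, Y, validation_set=(TX, TY))   # hypothetical training call
    del X, Y, TX, TY                             # drop the references...
    gc.collect()                                 # ...and reclaim memory now
print("done", k)
```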
Hi,
As advised in #187, I tried to call fit() multiple times to train a different model in each fold, but this leads to an out-of-GPU-memory error, apparently the same as in #248. I am using the AlexNet provided in your examples with a small dataset of 378 training and 120 test images across 8 classes in each fold.
When I place the model initialization
network = AlexNet()
and
model = tflearn.DNN(network)
before the start of the for loop, presumably the solution of #248, the resources are not exhausted. But the model weights persist into the next fold, so we get fine-tuning instead of the independent training that 10-fold validation requires. Is there a way to reset the model weights before each fit(), or to flush the old model/trainer from GPU memory? Thank you in advance for your reply; if I come up with a solution I would gladly contribute.
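The desired "reset before each fit()" can be pictured with a small sketch: re-run the weight initializer at the top of every fold so each fold trains from scratch. NumPy stands in for the model here; in plain TensorFlow 1.x the analogous step would be re-running the variables initializer in the model's session. Names and shapes are illustrative.

```python
# Conceptual sketch: reinitialize weights at the start of each fold so
# no trained state leaks from one fold into the next (NumPy stand-in).
import numpy as np

def init_weights(seed=42):
    # Fixed seed: identical fresh weights at the start of every fold.
    return np.random.default_rng(seed).standard_normal((3, 3))

first_fold = None
for fold in range(3):
    w = init_weights()                 # the "reset" step before fitting
    if first_fold is None:
        first_fold = w.copy()
    w *= 2.0                           # stand-in for training updates
print(np.allclose(init_weights(), first_fold))  # True: every fold starts equal
```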
PS: The error at the second iteration is the following: