Freya-Ebba-Christ opened this issue 4 years ago
Thanks! We'll add this into the main code once we ensure that the code will continue to work for those using a different backend.
I don't know about Theano, but in my experience this is very much a TF issue. There is no bug in TF here; everything works as designed. It is just that a lot of people misunderstand this behavior.
When running the LSTM decoder in ManyDecoders_FullData with Keras and the TF backend, I am experiencing a memory leak. The problem is well known. What seems to work is to explicitly delete the model, clear the session, and call the garbage collector by adding:
```python
from keras import backend as K
import gc

del model          # explicitly drop the reference to the Keras model
K.clear_session()  # reset the graph/session that Keras holds on to
gc.collect()       # force Python to release the freed memory
```

Within the import section, I have also added:

```python
from keras.backend.tensorflow_backend import set_session
import tensorflow as tf

config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of grabbing it all
sess = tf.compat.v1.Session(config=config)
set_session(sess)  # make Keras use this session
```
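For context, here is roughly where that cleanup goes when fitting many decoders in a loop. This is only a sketch; `build_decoder`, the layer sizes, and the data shapes are placeholders, not the code from the actual notebook:

```python
from keras import backend as K
from keras.models import Sequential
from keras.layers import LSTM, Dense
import gc

def build_decoder(n_timesteps, n_features, n_outputs):
    # placeholder architecture; the real decoder lives in ManyDecoders_FullData
    model = Sequential()
    model.add(LSTM(64, input_shape=(n_timesteps, n_features)))
    model.add(Dense(n_outputs))
    model.compile(loss='mse', optimizer='adam')
    return model

for fold in range(10):
    model = build_decoder(n_timesteps=10, n_features=50, n_outputs=2)
    # model.fit(X_train, y_train, epochs=5, verbose=0)
    # ... evaluate / store predictions ...

    # free the graph and GPU memory before the next fold
    del model
    K.clear_session()
    gc.collect()
```

Without the last three lines, each iteration adds another graph to the session and memory keeps growing.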
The allow_growth setting also makes it possible to share a GPU without taking precious GPU memory from other users/sessions.
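Side note: if anyone hits this on TensorFlow 2.x with tf.keras, the same effect as allow_growth comes from enabling per-GPU memory growth. This is just the standard TF 2 API, nothing specific to this repo:

```python
import tensorflow as tf

# must run before any GPU has been initialized (i.e. before building models)
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```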
For selecting the GPU (NVIDIA only) I run:

```python
import os

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # use the id reported by $ nvidia-smi
```
alternatively,
```python
from keras import backend as K
import tensorflow as tf

with K.tf.device('/gpu:1'):
    config = tf.ConfigProto(device_count={'GPU': 1})
    session = tf.Session(config=config)
    K.set_session(session)
```
should also work.
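For what it's worth, the two approaches are not interchangeable: CUDA_VISIBLE_DEVICES hides the other GPUs from the whole process (so it must be set before TensorFlow initializes CUDA), while the tf.device/ConfigProto route only controls placement among the devices TensorFlow can already see.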