hammerlab / flowdec

TensorFlow Deconvolution for Microscopy Data
Apache License 2.0

Problem with running notebooks in series #13

Closed · dmilkie closed this 5 years ago

dmilkie commented 5 years ago

Perhaps obvious to others, but I (being a Python noob) was struggling to figure out why one notebook was working but not a second.

I think the answer is: TensorFlow GPU will "map nearly all of the GPU memory of all GPUs". This of course prevents a second notebook from running after a first notebook has finished, because the first notebook has not released its GPU resources. Once I closed the first notebook, the second one worked.

Am I missing an obvious way to unload TensorFlow besides restarting/killing the Python kernel?

eric-czech commented 5 years ago

Hey @dmilkie , you can try something like:

import tensorflow as tf
from flowdec import restoration as fd_restoration

# Allocate GPU memory as needed rather than all up front, and cap this process at 20%
session_config = tf.ConfigProto()
session_config.gpu_options.allow_growth = True
session_config.gpu_options.per_process_gpu_memory_fraction = 0.2

# acq is your Acquisition (image + PSF) and niter the iteration count
algo = fd_restoration.RichardsonLucyDeconvolver(n_dims=3, pad_mode='none').initialize()
algo.run(acq, niter=niter, session_config=session_config)
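
For what it's worth, on newer TensorFlow (2.x) ConfigProto has moved under the compatibility module; a rough, untested sketch of the same settings (assuming flowdec's session_config argument still accepts a ConfigProto) would be:

import tensorflow as tf

# Same GPU options via the TF 1.x compatibility layer shipped with TF 2.x
session_config = tf.compat.v1.ConfigProto()
session_config.gpu_options.allow_growth = True
session_config.gpu_options.per_process_gpu_memory_fraction = 0.2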

That usually works for me when I need to run multiple notebooks using TF at once. In practice, though, my GPU rarely has enough memory for a 20% slice to be useful, so I've gotten into the habit of running no more than one or two kernels per GPU at a time (see the sketch below for pinning a kernel to a specific GPU). As far as I know there's no way for two TF processes to share the same GPU memory, so I think that's just a fundamental limitation.
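
One simple way to keep kernels out of each other's way is to pin each notebook to its own GPU before TensorFlow is imported, using the standard CUDA_VISIBLE_DEVICES environment variable (the index '0' below is just an example; a second notebook could use '1'):

import os

# Must run before TensorFlow is first imported in this kernel;
# afterwards this process only sees GPU 0
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

import tensorflow as tf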

eric-czech commented 5 years ago

I'll close this out now, but let me know if you run into anything that suggests it should be reopened @dmilkie (i.e. something relating to TF configuration that isn't already exposed in flowdec).