hammerlab / flowdec

TensorFlow Deconvolution for Microscopy Data
Apache License 2.0

GPU memory exhaustion errors #15

Closed · eric-czech closed this 5 years ago

eric-czech commented 5 years ago

via email from Samantha Esteves:

I’m really interested in getting GPU Richardson-Lucy deconvolution working on my machine (Windows 10, Quadro M2200 4GB) and am running through the CElegans example notebook, but I get an out-of-memory error when I run the deconvolution cell. I’m wondering if this is a true error (does the GPU require more than 4GB of memory for this example?) or if there is a configuration error I should be looking into?

eric-czech commented 5 years ago

The fix in this case was to change the padding mode used on the images to avoid resizing them to the next highest power of 2 along each dimension:

from flowdec import restoration as fd_restoration

# Use pad_mode='none' instead of the default 'log2'
algo = fd_restoration.RichardsonLucyDeconvolver(n_dims=3, pad_mode='none').initialize()
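
For a sense of scale, the padding alone can multiply memory use several times over. Here is a rough back-of-the-envelope sketch (the volume shape below is hypothetical, not the CElegans example's actual dimensions):

import numpy as np

# Hypothetical 3D stack shape (z, y, x)
shape = (70, 600, 600)

# pad_mode='log2' rounds each dimension up to the next power of 2
padded = tuple(int(2 ** np.ceil(np.log2(d))) for d in shape)  # -> (128, 1024, 1024)

# float32 size of one copy of each volume, in MB
def mb(s):
    return float(np.prod(s)) * 4 / 1024 ** 2

print(round(mb(shape)), round(mb(padded)))  # ~96 MB vs 512 MB, a >5x increase
                                            # before counting FFT intermediates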

Additionally, configuring these session options overrides the default TF behavior of preallocating nearly 100% of GPU memory for every Python process (or Jupyter kernel) that uses it, which can be problematic even with a single process:

import tensorflow as tf

session_config = tf.ConfigProto()
# allow_growth=True allocates GPU memory as needed rather than preemptively
session_config.gpu_options.allow_growth = True
# Cap usage at 100% of GPU memory (the default); lower this to share the GPU
session_config.gpu_options.per_process_gpu_memory_fraction = 1.0
# acqs maps channel names to Acquisition objects, as in the example notebook
res = {ch: algo.run(acqs[ch], niter=100, session_config=session_config) for ch in acqs}
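
Putting both fixes together, here is a minimal end-to-end sketch; the random arrays are placeholders standing in for the notebook's actual acquisition data and PSF:

import numpy as np
import tensorflow as tf
from flowdec import data as fd_data
from flowdec import restoration as fd_restoration

# Placeholder acquisition; the real example loads the CElegans dataset instead
acq = fd_data.Acquisition(
    data=np.random.rand(32, 64, 64).astype(np.float32),
    kernel=np.random.rand(16, 32, 32).astype(np.float32))

# pad_mode='none' skips inflating each dimension to the next power of 2
algo = fd_restoration.RichardsonLucyDeconvolver(n_dims=3, pad_mode='none').initialize()

session_config = tf.ConfigProto()
session_config.gpu_options.allow_growth = True  # allocate GPU memory on demand

res = algo.run(acq, niter=100, session_config=session_config)
print(res.data.shape)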