Closed: eric-czech closed this issue 5 years ago
The fix in this case was to change the padding mode used on the images to avoid resizing them to the next highest power of 2 along each dimension:
# Use pad_mode='none' instead of the default 'log2'
from flowdec import restoration as fd_restoration

algo = fd_restoration.RichardsonLucyDeconvolver(n_dims=3, pad_mode='none').initialize()
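To see why the default padding can exhaust GPU memory, here is a small illustrative sketch (the `next_pow2` helper and the example volume shape are hypothetical, not part of flowdec) showing how rounding each dimension up to the next power of 2 inflates the array size:

```python
import math

def next_pow2(n):
    # Smallest power of 2 greater than or equal to n (hypothetical helper)
    return (1 << (n - 1).bit_length()) if n > 0 else 1

shape = (41, 1000, 1000)  # example z, y, x volume shape
padded = tuple(next_pow2(d) for d in shape)  # (64, 1024, 1024)
ratio = math.prod(padded) / math.prod(shape)
print(padded, round(ratio, 2))  # roughly a 1.6x increase in memory for this shape
```

With `pad_mode='none'` the volume is processed at its original shape, trading that memory overhead for whatever FFT speed benefit power-of-2 sizes would have provided.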
Additionally, configuring these session options overrides the default TensorFlow behavior of preallocating nearly 100% of GPU memory for every Python process (or Jupyter kernel) that uses it, which can be problematic even with a single process:
import tensorflow as tf

session_config = tf.ConfigProto()
# allow_growth=True allocates GPU memory as needed rather than preemptively
session_config.gpu_options.allow_growth = True
# Cap this process at up to 100% of GPU memory; lower this fraction to share the GPU
session_config.gpu_options.per_process_gpu_memory_fraction = 1.0
res = {ch: algo.run(acqs[ch], niter=100, session_config=session_config) for ch in acqs}
Via email from Samantha Esteves: