hammerlab / flowdec

TensorFlow Deconvolution for Microscopy Data
Apache License 2.0

Feature request: Voxel sizes for PSF and data #12

Closed · dmilkie closed this 5 years ago

dmilkie commented 5 years ago

This library really rocks!
Feature request: The voxel sizes of the images to be deconvolved do not always match the voxel sizes of the measured PSF. Could these be added as parameters, with the necessary interpolation done on the GPU?

eric-czech commented 5 years ago

Thanks @dmilkie ,

3D image scaling with TensorFlow is a bit of a pain in my experience, but to at least get a rough sense of the improvement this might bring, here is a quick benchmark of bilinear interpolation on GPU vs. CPU:

import tensorflow as tf
import numpy as np
from skimage import transform

# Generate ~530MB image with random data for rescaling as well as 
# a target shape at 80% of original
img = np.random.uniform(-1, 1, size=(1440, 1920, 48)).astype(np.float32)
target_shape = tuple((np.array(img.shape)*.8).astype(int))
img.shape, target_shape, img.nbytes
#> ((1440, 1920, 48), (1152, 1536, 38), 530841600)

# Define a method for 3D resize via TF
def resize_volume(new_shape, input_tensor):
    """
    Lifted from https://niftynet.readthedocs.io/en/dev/_modules/niftynet/layer/linear_resize.html

    :param new_shape: Target shape (``X, Y, Z``)
    :param input_tensor: Image tensor with shape ``batch, X, Y, Z, Channels``
    :return: interpolated volume
    """
    b_size, x_size, y_size, z_size, c_size = \
        input_tensor.shape.as_list()
    x_size_new, y_size_new, z_size_new = new_shape

    squeeze_b_x = tf.reshape(
        input_tensor, [-1, y_size, z_size, c_size])
    resize_b_x = tf.image.resize_bilinear(
        squeeze_b_x, [y_size_new, z_size_new])
    resume_b_x = tf.reshape(
        resize_b_x, [b_size, x_size, y_size_new, z_size_new, c_size])
    reoriented = tf.transpose(resume_b_x, [0, 3, 2, 1, 4])

    squeeze_b_z = tf.reshape(
        reoriented, [-1, y_size_new, x_size, c_size])
    resize_b_z = tf.image.resize_bilinear(
        squeeze_b_z, [y_size_new, x_size_new])
    resume_b_z = tf.reshape(
        resize_b_z, [b_size, z_size_new, y_size_new, x_size_new, c_size])

    return tf.transpose(resume_b_z, [0, 3, 2, 1, 4])

# Open a session and define input/output tensors so the timings exclude graph
# construction and session initialization (host-to-GPU transfer is still included)
sess = tf.InteractiveSession()
b_img = img[np.newaxis, ..., np.newaxis]
t_in = tf.placeholder(tf.float32, shape=b_img.shape)
t_out = resize_volume(target_shape, t_in)

%%timeit -r 10 -n 1
t_out.eval({t_in: b_img})
#> 585 ms ± 45.7 ms per loop (mean ± std. dev. of 10 runs, 1 loop each)
sess.close()

%%timeit -r 10 -n 1
transform.resize(img, target_shape, mode='constant', anti_aliasing=False)
#> 9.72 s ± 89.1 ms per loop (mean ± std. dev. of 10 runs, 1 loop each)

~10 s on a CPU vs. half a second on a GPU does seem worth working in somehow, though I'm a little torn on how to do it, given that it's a one-time transformation and would still be at least an order of magnitude slower than the deconvolution itself.

I'm thinking it would probably be best as a standalone context manager with a companion helper function like:

# PSF downscaling factors as X, Y, Z
scale_x, scale_y, scale_z = 1, 1, .8

# For scaling multiple images without recreating the TF graph each time
with flowdec.transform.Transformer() as trans:
    # In a loop over per-channel PSFs
    psf_image_scaled = trans.rescale(psf_image, scale=(scale_x, scale_y, scale_z))

# OR if you don't care about the graph initialization time (which would be negligible with few PSFs)
# a function like this would wrap the above
psf_image_scaled = flowdec.transform.rescale(psf_image, scale=(scale_x, scale_y, scale_z))

Does that make sense to you? Users would still have to do the math on the scaling factors themselves, but that would let me continue to avoid adding anything to the API concerning units, or deciding which kinds of PSF transforms are detrimental (e.g. upsampling).
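For what it's worth, "the math on the scaling factors" is just a ratio of voxel sizes. A minimal sketch (the voxel sizes below are hypothetical, chosen so the Z factor comes out to the 0.8 used above):

```python
import numpy as np

# Hypothetical voxel sizes in microns as (X, Y, Z): here the measured PSF
# was sampled more finely in Z (0.4 um) than the data (0.5 um).
data_voxel_um = np.array([0.104, 0.104, 0.500])
psf_voxel_um = np.array([0.104, 0.104, 0.400])

# Factor that resamples the PSF onto the data's grid: each PSF voxel
# covers 0.4 / 0.5 = 0.8 of a data voxel in Z, so the PSF shrinks in Z.
scale = psf_voxel_um / data_voxel_um
print(scale)  # -> [1.  1.  0.8]

scale_x, scale_y, scale_z = scale
```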

dmilkie commented 5 years ago

Thanks, that makes sense. And it looks like someone is already implementing this idea: https://github.com/VolkerH/Lattice_Lightsheet_Deskew_Deconv/blob/master/Python/01_Lattice_Light_Sheet_Deconvolution.ipynb

eric-czech commented 5 years ago

Nice, @VolkerH got you covered! I haven't checked in a while but it's a bummer to see the current public opinion of the likelihood of official OpenCL support in TF as centering around the "probability of alien invasion and finding a way to travel back in time" (https://github.com/tensorflow/tensorflow/issues/22). Sigh..

Anyway, for anyone who encounters this in the future: gputools is likely a better way to manipulate 3D volumes/PSFs prior to deconvolution.
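As a sketch of what that could look like: gputools provides a GPU `scale` transform (the exact call below is an assumption, so check the API of your installed version), and `scipy.ndimage.zoom` is a reasonable CPU fallback when no OpenCL device is available:

```python
# Sketch only: rescale a PSF onto the data's voxel grid, preferring the
# gputools GPU path and falling back to a trilinear CPU resize.
import numpy as np
from scipy import ndimage

psf = np.random.rand(64, 64, 64).astype(np.float32)
zoom = (1.0, 1.0, 0.8)  # per-axis factors, e.g. PSF/data voxel-size ratios

try:
    import gputools  # requires a working OpenCL setup
    # NB: verify the axis-ordering convention of your gputools version
    psf_scaled = gputools.scale(psf, zoom, interpolation="linear")
except ImportError:
    psf_scaled = ndimage.zoom(psf, zoom, order=1)  # order=1: trilinear

print(psf_scaled.shape)
```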

VolkerH commented 5 years ago

wow, I only created the repo a couple of days ago ... I tried to tag you to let you know about it, but apparently cross-repo tagging is not possible.

I haven't put up a requirements file, but caveat emptor: gputools.affine used to use a different ordering of the coordinates from scipy.ndimage.affine_transform. This has changed in the develop branch that I've been using; I think the master branch still flips coordinates. See https://github.com/maweigert/gputools/issues/12
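Since that kind of mismatch fails silently, here is a small scipy-only illustration of how the same diagonal scaling matrix behaves under the two orderings (scipy.ndimage.affine_transform interprets the matrix in array-index order, axis 0 first, and the matrix maps output coordinates to input coordinates):

```python
import numpy as np
from scipy import ndimage

vol = np.zeros((10, 20, 30), dtype=np.float32)  # stored (Z, Y, X)
vol[3, 10, 10] = 1.0  # single bright voxel

# Matrix maps OUTPUT coords to INPUT coords, so a factor of 0.5 on an
# axis stretches the content along that axis by 2x.
out_zyx = ndimage.affine_transform(vol, np.diag([0.5, 1.0, 1.0]), order=1)
out_xyz = ndimage.affine_transform(vol, np.diag([1.0, 1.0, 0.5]), order=1)

# Z-first (matching the array layout): the point moves along Z,
# since output z=6 maps back to input z=3.
print(np.unravel_index(out_zyx.argmax(), out_zyx.shape))  # -> (6, 10, 10)
# Same factors listed X-first by mistake: it moves along X instead --
# exactly the kind of coordinate flip the gputools issue describes.
print(np.unravel_index(out_xyz.argmax(), out_xyz.shape))  # -> (3, 10, 20)
```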

VolkerH commented 5 years ago

@dmilkie, just saw on your profile you're at Janelia, are you also dealing with LLS data?

dmilkie commented 5 years ago

That’s correct. I’m in Eric Betzig’s lab; I wrote the control software for the microscopes, including the Lattice Lightsheet. Previous decon software (including the stuff we had written, which Tahnee and Kate from Monash were asking me to adapt for RTX cards) was fast, but not fast enough to keep up with the acquisition, which meant offline processing. That resulted in more cost (storage and processing) and hassle. Flowdec sounds like it could be fast enough to do this online with the acquisition, which IMO would be awesome. I’m not as familiar with Python and TensorFlow, so I’m eager to piggyback off what you or LLSpy put together.

-Dan


VolkerH commented 5 years ago

@dmilkie, thanks for the reply. That's good to know. I think the deconvolution will still have to happen offline though, even with flowdec.

I had some discussions with David who runs our lightsheet and had some ideas about live deskew visualization that would have me look into the Labview control software.

I don't want to hijack the flowdec issue for a lattice specific discussion though. If you're interested we can continue the discussion here: https://github.com/VolkerH/Lattice_Lightsheet_Deskew_Deconv/issues/11