Closed pr4deepr closed 1 year ago
This code from OPM can be used for inspiration:
Just to be sure I understand right: @dpshepherd's OPM code does not show the artifact? That would be awesome!
I think so.
So, I've done the comparison. The sample data I use is anisotropic anyway and we convert it into isotropic data after deskewing. I've uploaded a notebook here: https://github.com/BioimageAnalysisCoreWEHI/napari_lattice/blob/master/notebooks/deskew_artefact_opm_comparison.ipynb
Screenshot where OPM on the left and pyclesperanto deskew on the right:
This doesn't look like the artifact I've seen before. It looks like interpolation on (left) and off (right).
Ahh, sorry, I had a previous version of pyclesperanto for some reason. I've now used linear interpolation and it looks somewhat similar.
You can still see some minor differences, though: the image on the right looks a bit blurry.
Going to subsample the data as you said and try it again.
Could you try leaving out 3 of 4 z-planes to have much more anisotropic voxels? The artifact should then be very obvious...
Sorry, do you mean delete 3 out of 4 z-planes, i.e., keep slices 1,5,9 etc...?
Yes, something like:

```python
image = image[::4]               # keep every 4th z-plane
voxel_size_z = voxel_size_z * 4  # the z-spacing grows accordingly
```
OPM on left and pyclesperanto on the right. I've updated the notebook as well.
I'm confused, as I didn't realize there would be so much of a difference. Is there something wrong with my code?? Thanks for helping!
I presume you were just looking into the wrong plane, @pr4deepr. As explained here, it is best visible in the X-Z plane. I updated your notebook on a new branch. Check out the imshow commands in [4], [5] and [6]:
So yes, the OPM code appears to not show the artifacts. Very cool. We can learn from that. Maybe it has something to do with the position calculation here
hi all-
Our approach is an orthogonal interpolation that goes from the tilted reference frame to the coverslip reference frame in one step. This leads to a different interpolation result than applying shear and rotation transformations separately.
We do this in parallel across the volume using Numba. So far in our experience, this is faster than, or breaks even with, moving data to the GPU and back on our acquisition machine (it does have a lot of CPU cores). We are able to calculate and display fast enough for a preview in our Napari-based control code.
Vincent Maioli's PhD thesis directly inspired our approach.
> We do this in parallel across the volume using Numba. So far in our experience, this is faster or break even with moving data to the GPU and back on our acquisition machine (it does have a lot of CPU cores).
Yes we know; we benchmarked that. Very impressive indeed! We just need a GPU-implementation because we want to combine deskewing with denoising, deconvolution, segmentation etc without moving data back and forth between CPU and GPU. That's why we're reimplementing pretty much all common operations in OpenCL/clesperanto.
Do you agree @dpshepherd that this code here is computing the four correct positions in the raw data to interpolate from? If so we could implement pretty much the same in our OpenCL kernel for the affine transform, e.g. here. Effectively, that'll be a deskewing-specific affine transform...
And btw thanks for joining our discussion @dpshepherd 😃 🙌
> Yes we know; we benchmarked that. Very impressive indeed! We just need a GPU-implementation because we want to combine deskewing with denoising, deconvolution, segmentation etc without moving data back and forth between CPU and GPU. That's why we're reimplementing pretty much all common operations in OpenCL/clesperanto.
Wow, we weren't that scientific, because other approaches weren't around yet when we started. We just went with the "does the viewer update fast enough" method! Cool to see. I totally understand why you need the GPU implementation and I think that will be very valuable.
> Do you agree @dpshepherd that this code here is computing the four correct positions in the raw data to interpolate from?
Do you mean the sub-sampling you are doing in the scan direction? Or something deeper in OpenCL implementation?
> And btw thanks for joining our discussion @dpshepherd 😃 🙌
No problem! Thank you for all you do for the community!
> Do you mean the sub-sampling you are doing in the scan direction? Or something deeper in OpenCL implementation?
We're currently using the hardware-accelerated interpolation that comes built in. However, it assumes that neighboring pixels (in raw Z) are actually neighbors, which is not the case when deskewing. For example, in the classical affine transform a pixel `2` has four neighbors `1`:

```
0 0 0 0 0 1 0 0 0
0 0 0 0 1 2 1 0 0
0 0 0 0 0 1 0 0 0
```
However, a raw skewed image looks like this in the stack:

```
0 0 0 0 1 0 0 0 0
0 0 0 0 1 2 1 0 0
0 0 0 0 0 0 1 0 0
```

Because in reality, the pixels were acquired like this:

```
0 0 0 0 1 0 0 0 0
0 0 0 0 1 2 1 0 0
0 0 0 0 0 0 1 0 0
```
I conclude that we need to replace our interpolation with an explicit interpolation that deals with this shift. I presume the OPM code linked above does that...
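The neighbour mismatch described above can be sketched numerically. All geometry values below (tilt angle, stage step, pixel size) are hypothetical, chosen only for illustration, not taken from this thread:

```python
import numpy as np

# Hypothetical geometry: light sheet tilted at 30 degrees,
# 0.4 um stage step per plane, 0.1 um camera pixel size.
theta = np.deg2rad(30.0)
stage_step_um = 0.4
pixel_um = 0.1

def physical_coords(z, x):
    """Map array indices (z = plane, x = pixel along the scan axis) of a
    raw skewed stack to coverslip coordinates in micrometres.

    Each successive plane is shifted along the scan axis by the stage
    step, so array neighbours in z are NOT physical neighbours."""
    px = x * pixel_um + z * stage_step_um * np.cos(theta)
    pz = z * stage_step_um * np.sin(theta)
    return px, pz

# The voxel directly "above" (z+1, x) in the array sits several pixels
# away along x in physical space:
x0, z0 = physical_coords(10, 50)
x1, z1 = physical_coords(11, 50)
print(f"x-shift between adjacent planes: {x1 - x0:.2f} um "
      f"(~{(x1 - x0) / pixel_um:.1f} pixels)")
```

This is why a hardware sampler that blends `raw[z, x]` with `raw[z+1, x]` mixes intensities from positions that are far apart in the sample.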
Ah, I understand now. Yes, you need to decide which pixels to interpolate. We follow Maioli's proposal that interpolating in a direction orthogonal to the light sheet reduces the artifacts in the final reconstruction. This is effectively a bilinear interpolation of the four nearest points (as you noted above). There is a good technical discussion of different interpolation choices starting on page 53 of Maioli's thesis.
Figure 2.9 from his thesis has a good summary of the orthogonal interpolation strategy (a: column interpolation; b: orthogonal interpolation; "n" is the direction of the stage scan).
The section discussing his original implementation of orthogonal interpolation starts on page 199 of the thesis.
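As a rough illustration of the four-point idea (a generic sketch, not the OPM or Maioli implementation itself), a bilinear combination of the four raw samples surrounding a fractional position could look like this; in orthogonal deskewing that fractional position is chosen along a line perpendicular to the light sheet rather than along the array axes:

```python
import numpy as np

def bilinear_sample(raw, zf, xf):
    """Bilinear interpolation of the four raw samples surrounding the
    fractional position (zf, xf) in a 2D (z, x) slice.

    Orthogonal deskewing picks (zf, xf) along a line perpendicular to
    the light sheet, so the four samples combined here really are the
    physically nearest ones."""
    z0, x0 = int(np.floor(zf)), int(np.floor(xf))
    dz, dx = zf - z0, xf - x0
    return ((1 - dz) * (1 - dx) * raw[z0, x0] +
            (1 - dz) * dx       * raw[z0, x0 + 1] +
            dz       * (1 - dx) * raw[z0 + 1, x0] +
            dz       * dx       * raw[z0 + 1, x0 + 1])

raw = np.array([[0., 10.],
                [20., 30.]])
print(bilinear_sample(raw, 0.5, 0.5))  # centre of the four samples -> 15.0
```

The weights should be computed from physical distances in the tilted geometry; the pixel-index weighting above is only the structural skeleton.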
Edit: Also, I realized looking at our code that the reference to Maioli's thesis has disappeared from the code by somebody's refactor (maybe me, who knows...). I will tag the group that we need to add it back in.
Thanks @dpshepherd , this is super useful. Learning a lot from your code and from these discussions!!
Just saw this through my notifications. The orthogonal interpolation strategy looks interesting. Note that you should still weight the distances based on physical spacing, not pixel spacing, i.e. there is an anisotropy between x, y and z. The usual affine transforms on GPU don't handle this anisotropy; they assume voxels are equally spaced in all directions. Therefore, I decided to first perform an affine transform that makes the pixels isotropic before deskewing. This already avoids the worst interpolation artifacts.
Sorry, @VolkerH, didn't see this update. Thanks for your suggestion; I agree with your point. Ideally, we'd want to have it all in one transformation to save on GPU memory and make the processing faster, so the orthogonal interpolation approach is quite interesting. Also, thanks for your detailed notebooks in the lls_dd repo. I referred to them a lot to get an idea of the whole deskewing process and to understand affine transformations.
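The scale-to-isotropic-then-shear composition discussed above can be sketched as homogeneous 4x4 matrices; the voxel sizes and angle below are made-up illustration values, and the (z, y, x) axis ordering is just one possible convention:

```python
import numpy as np

# Hypothetical anisotropic raw voxel sizes and deskew angle:
dz_um, dy_um, dx_um = 0.4, 0.1, 0.1
theta = np.deg2rad(30.0)

# 1) Scale z (and y) so voxels become isotropic in units of dx.
#    Coordinate vectors are [z, y, x, 1].
scale = np.diag([dz_um / dx_um, dy_um / dx_um, 1.0, 1.0])

# 2) Shear x by z to undo the skew, acting on isotropic coordinates:
#    x' = x + z / tan(theta).
shear = np.eye(4)
shear[2, 0] = 1.0 / np.tan(theta)

# Composing the matrices yields one transform: no large intermediate
# sheared image needs to be materialised on the GPU.
combined = shear @ scale
print(combined)
```

Whether the single combined resampling pass or two separate passes (isotropize, then deskew) gives fewer artifacts is exactly the trade-off debated in this thread.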
Hi @pr4deepr ,
would you mind testing if the upstream bugfix solves this issue? It can be installed using `pip install pyclesperanto_prototype==0.20.0`, and I presume also via conda (in about a day).
Thanks!
Best, Robert
Thanks @haesleinhuepf, I still haven't gotten around to it. Will test this soon and post the result here. If it's working, I'll cut a new release for napari_lattice.
Referencing the release with the abovementioned update for interpolation: https://github.com/clEsperanto/pyclesperanto_prototype/releases/tag/0.20.0
Cheers Pradeep
Tested this and made updates in this pull request: https://github.com/BioimageAnalysisCoreWEHI/napari_lattice/pull/24
For comparison in the future, please use this reference notebook demonstrating the artefacts: https://github.com/clEsperanto/pyclesperanto_prototype/blob/master/demo/transforms/deskew_artifact_interpolation-0.19.4.ipynb
As we are using a single affine transformation for shearing and rotation, this can introduce artefacts in the final data, especially on non-isotropic data. @haesleinhuepf has raised it here: https://github.com/clEsperanto/pyclesperanto_prototype/issues/196
Turns out, this is a well-known issue that has been documented here: https://github.com/VolkerH/Lattice_Lightsheet_Deskew_Deconv/issues/22 and https://github.com/napari/napari/issues/1843. @haesleinhuepf, @VolkerH has discussed an OpenCL-related solution/fix, but I'm not sure I fully understand it.
We could technically chain the affine transformations individually, but the intermediate sheared image would be extremely large and cause memory issues on the GPU. Another option is to write an OpenCL kernel specifically for deskewing. This code from OPM can be used for inspiration: https://github.com/QI2lab/OPM/blob/5dc5a4f2046d220e09d038ae6d292f3590e4f015/reconstruction/image_post_processing.py#L33
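To illustrate why a single composed affine avoids the large intermediate image: the shear and rotation matrices can be multiplied, so the data is resampled in one pass. A toy scipy sketch with made-up shear values (not the pyclesperanto or OPM implementation):

```python
import numpy as np
from scipy import ndimage

# Toy skewed stack (z, x): one bright voxel per plane, shifted by a
# hypothetical 2 pixels in x per z-plane (illustration only).
raw = np.zeros((4, 12))
for z in range(4):
    raw[z, 3 + 2 * z] = 1.0

# ndimage.affine_transform maps OUTPUT coordinates to INPUT coordinates:
# input (z, x + 2*z) <- output (z, x), i.e. it undoes the shear.
deskew = np.array([[1.0, 0.0],
                   [2.0, 1.0]])
deskewed = ndimage.affine_transform(raw, deskew, order=1)

# The bright voxels now line up in a single column; only one
# interpolation pass happened, with no intermediate image.
print(deskewed[:, 3])  # -> [1. 1. 1. 1.]
```

Applying the same idea on the GPU keeps memory usage at one input plus one output volume; the remaining problem, as this thread shows, is which neighbours the single interpolation pass mixes.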