schlegelp / xform

A package to transform spatial data from one space to another.
GNU General Public License v3.0

Discussion : Apply transform to a 3D image #6

Open marbre2 opened 4 months ago

marbre2 commented 4 months ago

Hello, I was looking for python libraries to:

I found your libraries navis and xform and installed them. However, I understand that they are actually not adapted to my 3D images but to point lists. Do you have an idea whether there are other Python libraries I could use for that, and otherwise, how much work would it be to adapt your tools to my task?

I thank you in advance for your reply,

Best regards

schlegelp commented 4 months ago

Hi Marine. Yes, the initial focus of xform (and the corresponding code in navis) was on transforming points, not images. Transforming images is substantially more involved - in particular you need to know what your source and target spaces look like.

Let's say you know your target space is a cube of 100um in all dimensions and you want each pixel (voxel) to be 0.5um:

  1. Generate an empty (200, 200, 200) matrix for the target space
  2. For each pixel:
     a. Generate coordinates in physical space as expected by your transform - e.g. (0, 0, 0), (0, 0, 0.5), etc. if CMTK expects microns
     b. Map the physical coordinates to the physical source space using the CMTK transform
     c. Convert the physical coordinates to source pixel/voxel space
     d. Look up the value of your source image at the transformed coordinates
     e. Fill the voxel in your target image with the looked-up value
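The steps above can be sketched roughly like this. Note that `transform` here is a hypothetical stand-in for whatever callable maps target physical coordinates to source physical coordinates (e.g. a CMTK transform applied via xform), and the voxel sizes are the assumed 0.5um from the example:

```python
import numpy as np

def map_image(source, transform, target_shape=(200, 200, 200),
              target_voxel=0.5, source_voxel=0.5):
    """Fill a target image by per-voxel lookup into `source`.

    `transform` is a hypothetical placeholder: it takes an (N, 3) array
    of target physical coordinates and returns the corresponding (N, 3)
    source physical coordinates.
    """
    # 1. Empty target matrix
    target = np.zeros(target_shape, dtype=source.dtype)
    # 2a. Physical coordinates of every target voxel (e.g. in microns)
    idx = np.indices(target_shape).reshape(3, -1).T  # (N, 3) voxel indices
    phys = idx * target_voxel
    # 2b. Map target physical coords into source physical space
    src_phys = transform(phys)
    # 2c. Convert to source voxel indices (nearest neighbour here)
    src_idx = np.round(src_phys / source_voxel).astype(int)
    # Drop coordinates that fall outside the source image
    ok = np.all((src_idx >= 0) & (src_idx < np.array(source.shape)), axis=1)
    # 2d/2e. Look up values and fill the corresponding target voxels
    target.ravel()[np.flatnonzero(ok)] = source[tuple(src_idx[ok].T)]
    return target
```

This uses nearest-neighbour lookup for simplicity; interpolating between source voxels would give smoother results.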

I have recently added a small module to navis that does exactly that. It is experimental and only exists on GitHub for now, but you can try adapting it for your task.

As a disclaimer though: that implementation seems to work reasonably fast with vector fields but I did get the impression that it is really slow with CMTK transforms. I need to look into it some more but my suspicion is that it would be much more efficient to wrap the cmtk command that actually applies a transform instead of the pixel-by-pixel lookup in the current implementation.

marbre2 commented 4 months ago

Hi Philipp, Thank you very much for your kind and quick reply. I'll follow your advice and try out the experimental module. In fact, it only concerns a few images, which I would like to use for a semi-supervised DL registration task.

I'll let you know about my progress.

Best regards

clbarnes commented 4 months ago

scipy.ndimage.map_coordinates is probably a good start, if you haven't found it already: get the real-world coordinates of all pixels in your target space, reverse-transform them with xform, and then use map_coordinates to index into the source image.
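A minimal sketch of that approach, where `reverse_transform` is a hypothetical placeholder for the target-to-source physical mapping (in practice e.g. an xform transform applied in reverse):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def resample(source, reverse_transform, target_shape,
             target_voxel=1.0, source_voxel=1.0):
    # Real-world coordinates of every voxel in the target space
    idx = np.indices(target_shape).reshape(3, -1).T  # (N, 3)
    phys = idx * target_voxel
    # Reverse-transform into the source's physical space, then to voxel space
    src_coords = reverse_transform(phys) / source_voxel  # (N, 3)
    # map_coordinates expects one row per axis; order=1 -> trilinear
    # interpolation, out-of-bounds voxels are filled with 0
    values = map_coordinates(source, src_coords.T, order=1,
                             mode='constant', cval=0.0)
    return values.reshape(target_shape)
```

Compared to a hand-rolled nearest-neighbour lookup, `map_coordinates` gives you interpolation and boundary handling for free.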

Chunk the target space if you need to, although if you're using some chunked source store things will get pretty inefficient as chunks are accessed repeatedly. There's a long-outstanding issue to implement this in dask. You could take a crack at it manually by sending all your source chunk requests through an LRU-cached wrapper, then processing your target chunks in Z-order, which isn't a bad first guess for improving cache locality. If you can parallelise things, probably easiest just to partition the space to prevent access overlaps. A more efficient parallelisation scheme might have a shared cache but then you need to deal with a bunch of synchronisation problems.
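The LRU-cached wrapper idea can be sketched in a few lines. The chunked-store interface here is entirely hypothetical (real stores like zarr or N5 have their own APIs), and the chunk size is an assumed value:

```python
from functools import lru_cache

import numpy as np

CHUNK = 64  # assumed chunk edge length


@lru_cache(maxsize=128)
def load_chunk(cx, cy, cz):
    # Hypothetical placeholder: in practice this would fetch the chunk
    # from disk or the network. The lru_cache keeps recently used
    # chunks in memory so repeated accesses don't re-fetch them.
    return np.zeros((CHUNK, CHUNK, CHUNK), dtype=np.uint8)


def read_voxel(x, y, z):
    # Route every voxel read through the cached chunk loader, so that
    # nearby reads (e.g. target chunks processed in Z-order) reuse
    # already-fetched source chunks.
    chunk = load_chunk(x // CHUNK, y // CHUNK, z // CHUNK)
    return chunk[x % CHUNK, y % CHUNK, z % CHUNK]
```

Because `lru_cache` keys on the chunk indices, any access pattern with good spatial locality (like the Z-order traversal mentioned above) turns most chunk fetches into cache hits.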