jlevy44 / PathFlow-MixMatch

Don't mix, match! Simple utilities for improved registration of Histopathology Whole Slide Images.

Scalability and non rigid non linear transformations #1

Closed asmagen closed 4 years ago

asmagen commented 4 years ago

Does this package scale well to WSIs of size 15k x 15k pixels, and does it allow for non-rigid registration precise enough to align individual cells? The tissues I'm working with not only shift but also shrink, and so require non-linear transformations.

Thanks

asmagen commented 4 years ago

Also, can you provide a usage example for registering two png images?

jlevy44 commented 4 years ago

Hey @asmagen, this package should scale well to WSIs up to 50-60k pixels in any given spatial dimension, provided the tissue can mostly be separated into distinct connected components (see paper). Our paper only details macro-architectural alignment for now, since that was the dataset we had on hand, but we are working to acquire the dataset needed to improve cell-level alignment. Maybe once you try the algorithm on your images, you can let us know how it went?

There are still a few loss functions and deformations we will be adding, but some of them do handle non-linear transforms.

I need to push a quick update to handle file types (e.g., png) other than npy format. I'll update the package in a few hours and send you some commands.

Thank you for this feedback! Very much appreciated :)

jlevy44 commented 4 years ago

Ok, I just updated the code, but have not yet pushed a new PyPI release because the new code has not been tested yet.

Feel free to try it out; instructions can be found in the updated README.

Please do let us know if you encounter bugs (likely, since I just pushed quite a few changes rapidly) and also whether you have some success here (possibly room for fruitful collaboration if there is interest!).

If you encounter bugs, I'll spend some time in the next few days debugging. The PyPI installation will work with npy format (it's easy to convert png to numpy), while the latest build will work with any format readable by numpy or cv2.

jlevy44 commented 4 years ago

Just to add: we should have a dataset to test cell-level alignment now, so I may have time over the weekend to try it out myself. I'm running a few jobs right now that make it difficult to find the memory available for this one. You are welcome to continue to contribute; all of our discussion so far has been quite beneficial. :)

asmagen commented 4 years ago

May be of interest with regard to applying deformations: warping with drop2

asmagen commented 4 years ago

See new and relevant response here: https://github.com/MRtrix3/mrtrix3/issues/2004#issuecomment-609003607

jlevy44 commented 4 years ago

Hey, thanks for sending this. Unfortunately, I was not able to complete the necessary modifications this weekend, as my qualifying exam is this week.

Getting ITK to MRtrix and/or getting the .nii files to work will be a priority when things clear up. Meanwhile, I will also be working to get the non-linear transform code online; it just may take me a little longer than expected. Thanks for all of your help @asmagen

asmagen commented 4 years ago

Thank you @jlevy44 , looking forward to it soon.

asmagen commented 4 years ago

FYI: https://github.com/airlab-unibas/airlab/issues/19

jlevy44 commented 4 years ago

Ok, just finished my qual. I pushed some code but haven't been able to test it yet. The new command-line option is apply_drop2_transform:

pathflow_mixmatch apply_drop2_transform --source_image [IMAGE TO WARP] --ref_image [WARP TO THIS] --dx [X DISPLACEMENT FROM DROP2] --dy [Y DISPLACEMENT FROM DROP2] --gpu_device -1

You can also try setting gpu_device to 0 and see how memory constraints change when warping on the GPU. Fingers crossed that this works; if not, I will debug later today. This should be a quick fix until we debug our own pipeline a bit more.

asmagen commented 4 years ago

Thanks! I'll try it out asap.

asmagen commented 4 years ago

Thanks for incorporating these changes @jlevy44. Since there is a call to a target_image object that's not yet defined, I extracted the code and tried to continue working with the rest of it. Most of it worked, but there's an issue with the object types passed as parameters to displace_image:

>>> source_img=cv2.imread(source_image)
>>> source_img.shape
(4308, 2928, 3)
>>> ref_img=cv2.imread(ref_image)
>>> ref_img.shape
(4302, 2846, 3)
>>> source_img=cv2.resize(source_img,ref_img.shape[:2][::-1])
>>> source_img.shape
(4302, 2846, 3)
>>> dx,dy=nibabel.load(dx).get_fdata(),nibabel.load(dy).get_fdata()
>>> displacement=th.tensor(np.concatenate([dx,dy],-1)).unsqueeze(0).permute(0,2,1,3)
>>> displacement.shape
torch.Size([1, 4302, 2846, 2])
>>> new_img = displace_image(source_img, displacement, gpu_device)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in displace_image
  File "/usr/local/lib/python3.7/site-packages/airlab/transformation/utils.py", line 101, in warp_image
    warped_image = F.grid_sample(image.image, displacement + grid)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/functional.py", line 2711, in grid_sample
    return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners)
RuntimeError: grid_sampler(): expected input and grid to have same dtype, but input has float and grid has double

Apparently displacement is torch.float64 and source_img is uint8. I also changed the cv2.resize call because it generated an issue with the dimensions, which were swapped.
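The error above comes from F.grid_sample requiring the image tensor and the sampling grid to share a floating dtype. A minimal sketch (not the package's code; the tensor shapes here are placeholders) showing the cast that avoids it:

```python
# Minimal sketch: F.grid_sample raises "expected input and grid to have same
# dtype" when the image is float32 but the grid is float64 (as produced by
# nibabel's get_fdata). Casting the grid to the image's dtype resolves it.
import torch
import torch.nn.functional as F

image = torch.rand(1, 3, 8, 8, dtype=torch.float32)  # N, C, H, W
grid = torch.zeros(1, 8, 8, 2, dtype=torch.float64)  # float64, like nibabel output

# grid.to(image.dtype) matches the dtypes before sampling
warped = F.grid_sample(image, grid.to(image.dtype), align_corners=True)
```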

I found that it happens in displace_image, requiring a change of the dtype from th.float32 to th.float64: im=al.utils.image.create_tensor_image_from_itk_image(im, dtype=th.float64, device=('cuda:{}'.format(gpu_device) if gpu_device>=0 else 'cpu')), which made it run to completion, but when I open the image I see only black pixels.

Any idea what's happening?

jlevy44 commented 4 years ago

It’s possible that all of the values were zeroed out at some point; I’m not sure, though. Could be some division or multiplication by 255. Just suggesting some ideas, and I’ll look into it. I think we’re close though!

jlevy44 commented 4 years ago

https://github.com/airlab-unibas/airlab/commit/80c9d487c012892c395d63c6d937a67303c321d1

asmagen commented 4 years ago

Was that the problem? Is it working for you now?

jlevy44 commented 4 years ago

Yeah, it's working! It just needs some final touches. I would test it out, then carefully consider the following options; I have not tested them in totality, and am thus still having trouble finding the ideal transform.

The biggest remaining issue is that it is not readily apparent how dx and dy should be applied to build the displacement matrix, so I have added the temporary options --flip_pattern and --flip_xy to experiment with which combination of flipping the first two axes of dx and dy, and whether to reverse the order of dx and dy, works.

Looking at flip_pattern, setting elements to -1 corresponds to [flip dx axis 0, flip dx axis 1, flip dy axis 0, flip dy axis 1], and flip_xy then reverses the order of [dx, dy] to [dy, dx] before concatenation. I am unsure which combination of these options is correct: there are 32 options to select from, and one of them is the right one. When we find it, I will hard-code it. Check out the code to see how this experiment is set up.
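The 32-way search described above can be sketched with itertools: 2**4 sign patterns times two concatenation orders. This is an illustrative enumeration under the stated convention, not the package's actual code; `dx` and `dy` are placeholder 2-D arrays standing in for the drop2 displacement fields:

```python
# Sketch of the flip experiment: enumerate every sign pattern for
# [dx axis 0, dx axis 1, dy axis 0, dy axis 1] (16 patterns), each with
# and without swapping dx/dy order -- 32 candidate displacement fields.
import itertools
import numpy as np

def candidate_displacements(dx, dy):
    for signs in itertools.product([1, -1], repeat=4):
        fdx = dx[::signs[0], ::signs[1]]  # step -1 flips that axis of dx
        fdy = dy[::signs[2], ::signs[3]]  # step -1 flips that axis of dy
        for flip_xy in (False, True):
            fields = (fdy, fdx) if flip_xy else (fdx, fdy)
            yield signs, flip_xy, np.stack(fields, axis=-1)

# placeholder fields for illustration
dx = np.arange(6, dtype=float).reshape(2, 3)
dy = np.arange(6, dtype=float).reshape(2, 3) * 10
combos = list(candidate_displacements(dx, dy))
print(len(combos))  # 32
```

Each candidate would then be fed through the warp and inspected visually to find the one correct convention.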

Sorry for the delay!

Try reinstalling from github and running (replace with your own images/files):

 pathflow-mixmatch apply_drop2_transform --flip_pattern [-1,-1,1,1] --flip_xy True --source_image A.png --ref_image B.png --dx field_x.nii.gz --dy field_y.nii.gz --gpu_device -1 --output_file test.warp.png

And let me know which flip_pattern and flip_xy work for you so I can make the changes! I'll experiment more tomorrow. We're close.

Wishing you the best of luck on your research! Seems like a very interesting project.

jlevy44 commented 4 years ago

Ok, I tested all 32 options and, unfortunately, all came back negative. But this is because I forgot to specify --ocompose, as per this issue: https://github.com/biomedia-mira/drop2/issues/2

I think with this specified, I should be able to get it up and running. @asmagen , remember to specify --ocompose to get the entire deformation field.

I will modify my command to obtain the ideal output!

jlevy44 commented 4 years ago

Now, it appears to be working! Should have it fully functional by tomorrow!

jlevy44 commented 4 years ago

@asmagen Ok! Should be good to go. There may be minor defects related to the interpolation method that I can look into remedying further, but give it a shot; I was able to get it to work on my data with only minor defects.

jlevy44 commented 4 years ago

pathflow_mixmatch apply_drop2_transform --source_image [IMAGE TO WARP] --ref_image [WARP TO THIS] --dx [X DISPLACEMENT FROM DROP2] --dy [Y DISPLACEMENT FROM DROP2] --gpu_device -1

asmagen commented 4 years ago

I see we got input here regarding the registration issues with airlab. How can we progress with utilizing that solution here?

sumanthratna commented 4 years ago

Hi @asmagen! I have a draft PR (it's a work in progress) that should fix the nonlinear transformations (#12). It's not ready yet but I'd expect it to be ready in the next few days.

If you're working on a tight deadline, please try out my fork of this repo and let me know if it works for you!

asmagen commented 4 years ago

Hi @sumanthratna, has it been resolved, or is it still in progress?

sumanthratna commented 4 years ago

Hi @asmagen! Progress has been a little slow on mixmatch, but I believe that the loss of details in wendland can be resolved by finding the right parameters, which might take some trial-and-error. Unfortunately, I don't know if I'll be able to take another look at it until after May 18.

sumanthratna commented 4 years ago

Update for @asmagen: it turns out that decreasing the learning rate significantly fixes the issue we saw in https://github.com/airlab-unibas/airlab/issues/26. I think the plan is to do some more testing, but I believe the PR should be merged soon.

jlevy44 commented 4 years ago

@asmagen we’re in the middle of testing. I would try the drop2 transformation, which should work; if not, I agree with Sumanth: definitely fork the repo, or contact the airlab developers to ask about the block of code I added for applying drop2. They may be able to help out.

jlevy44 commented 4 years ago

I think our code is very close for the drop2 solution, though it may need one or two minute changes.

jlevy44 commented 4 years ago

https://github.com/jlevy44/PathFlow-MixMatch/blob/master/pathflow_mixmatch/cli.py#L482

asmagen commented 4 years ago

Hi @jlevy44 @sumanthratna, I'm getting back to the registration now and wanted to catch up with the latest developments on the issue we had with the transformation and the learning rate. Where do we stand? What is currently working, what requires attention on my end, and how do I run it? To clarify, I'm referring to regular grayscale image registration where you have one or two large tissue chunks rather than multiple small components. Thanks