MathOnco / valis

Virtual Alignment of pathoLogy Image Series
https://valis.readthedocs.io/en/latest/
MIT License

QuPath Annotation Transfer #13

Closed abadgerw closed 1 year ago

abadgerw commented 2 years ago

I have annotations that were made on one slide in QuPath that I would like to transfer onto a serial section. I've tried other methods like QuPath Interactive Image Alignment and Warpy but was told Valis performed better registration. I was reading the documentation and see that Valis is typically used to create a single image from 2 serial sections. Could it be used for my purpose of just tabulating registration parameters in order to transfer annotations to the right place?

I'm not a very savvy coder so any assistance you can provide would be greatly appreciated. Thanks!

cdgatenbee commented 2 years ago

Hi @abadgerw, Yes, Valis should be able to transfer your annotations to another serial section. You would register the slides, read in your annotation points (xy coordinates, in pixel units), and then use the corresponding Slide object to warp the annotated points. That would look something like this:

from valis import registration

# Perform registration
registrar = registration.Valis(slide_src_dir, results_dst_dir)
rigid_registrar, non_rigid_registrar, error_df = registrar.register()

# Read in your annotations here and name as annotation_pt_xy. 
# Be sure they are in pixel units, not physical units. See below for how to convert them to pixel units if the annotations are in physical units
# annotation_img_f is the name of the slide the annotations came from
# target_img_f  is the name of the slide you want to transfer the annotations to 

annotation_source_slide = registrar.get_slide(annotation_img_f)
target_slide = registrar.get_slide(target_img_f)

annotations_on_target_slide_xy = annotation_source_slide.warp_xy_from_to(annotation_pt_xy, target_slide)
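For example, if the annotations were exported as GeoJSON, a minimal sketch for reading polygon vertices into annotation_pt_xy might look like this (assuming simple Polygon features; the file name here is hypothetical):

import json
import numpy as np

with open("annotations.geojson") as f:
    geojson = json.load(f)

# Stack the exterior-ring vertices of every Polygon feature into an (N, 2) array
annotation_pt_xy = np.vstack([
    np.array(ft["geometry"]["coordinates"][0])
    for ft in geojson["features"]
    if ft["geometry"]["type"] == "Polygon"
])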

However, if your annotations are in physical units you'll need to first convert them to pixel units. This can be done using each Slide's metadata:

# Convert to pixel units. 
# annotation_source_slide.units will show you the unit, e.g. µm
# annotation_source_slide.resolution will show you the number of physical units per pixel, i.e. 0.5 would be 0.5 µm/pixel
pt_px = annotation_pt_xy_um/annotation_source_slide.resolution

# Warp points
annotations_on_target_slide_xy_px = annotation_source_slide.warp_xy_from_to(pt_px, target_slide)

#Convert back to physical units
annotations_on_target_slide_xy_um = target_slide.resolution*annotations_on_target_slide_xy_px

You may also find the 2nd example in the "Warping Points" section of the README helpful: https://valis.readthedocs.io/en/latest/#warping-points. The main difference is that in that example the points are being warped to the registered image space to extract an ROI, whereas it sounds like you may instead want to transfer the points to their position on a different unwarped image. Hopefully the above example will work for you, but if not please let me know.

Best, -Chandler

abadgerw commented 2 years ago

Thank you so much! I'll try this, see how it goes, and report back.

One potentially naive question before starting: What format would be best to export the image with annotations as? Should I export a full labeled image with annotations, or labeled tiles? The annotations were made in QuPath, and it has lots of export options to choose from: https://qupath.readthedocs.io/en/stable/docs/advanced/exporting_annotations.html

cdgatenbee commented 2 years ago

Hi @abadgerw, Sorry, I guess I had assumed that the annotations were shape vertices that could be warped and then used to draw the new annotations on a different image (e.g. GeoJSON). However, it sounds like you'd like to warp a binary or labeled image instead. That can be done too, but it would just be different from the example. I'd be happy to help put together some example code on how to do this, but could you tell me a little more about your annotations? How large is the annotation image, and can QuPath save the full image, or would it be necessary to save only individual ROIs? Using the full-size labeled image would be the most straightforward approach, since valis can read the annotation image from file, warp it using the registration parameters, and then save it as a new image. If that's not possible you could also warp the individual ROIs, but it would probably be a little more complicated. Either way, let me know and I'll try to put an example together.

Best, -Chandler

abadgerw commented 2 years ago

Thanks, Chandler! It looks like I can export the annotations from QuPath as GeoJSON using the QuPath code on this webpage (https://qupath.readthedocs.io/en/stable/docs/advanced/exporting_annotations.html).

Which approach do you think would work best: exporting the annotations and then warping those onto a serial section or exporting a full labeled image and warping that to place the annotations onto the new images? Since the serial sections are commonly rotated, would the latter be more accurate since features from the serial sections could be used to register the images before annotation transfer?

Attached is an example image with annotations:

[Screenshot: Screen Shot 2022-08-22 at 3.28.19 PM]

The files are .svs images, typically upwards of 500 MB. It looks like QuPath can export the full image and annotations without downsampling as an ome.tif using the script on the following webpage: https://qupath.readthedocs.io/en/stable/docs/advanced/exporting_annotations.html. Here is the output of that: https://drive.google.com/file/d/1Lxf1szuTj_R9ksN24SJQyEZfHB370QC_/view?usp=sharing

Happy to have a chat via teleconference if that would be helpful. Thanks again for all your help!

cdgatenbee commented 2 years ago

Hi @abadgerw, I think the cleanest and most memory-efficient way to do this would be to warp the GeoJSON points as described above, save them, and then load them into QuPath. That way the only extra file you'll need to save is the warped annotation vertices, which probably won't take up much space. Plus, the image you are transferring the annotations to won't have to be warped at all.

Alternatively, if you'd like to use the labeled image, you can specify the image you're using for annotations as the "reference_img_f" (i.e. registrar = registration.Valis(slide_src_dir, results_dst_dir, reference_img_f=annotation_img_f)), and then just warp and save the image(s) you want to transfer the annotations to. Since they've been aligned to the annotated image, the labels you already have should all line up without the need to warp the labeled image itself. So it would just look like this:

from valis import registration

# Perform registration
# annotation_img_f is the name of the slide the annotations came from
registrar = registration.Valis(slide_src_dir, results_dst_dir, reference_img_f=annotation_img_f)
rigid_registrar, non_rigid_registrar, error_df = registrar.register()

# target_img_f  is the name of the slide you want to transfer the annotations to 
# dst_f is the file name of the warped slide. Since it's been aligned to the annotated image, the labels should line up.

target_slide = registrar.get_slide(target_img_f)
target_slide.warp_and_save_slide(dst_f)

# Load dst_f in QuPath with the original labeled image

The downside here is that the warped slide could have some deformations from the non-rigid registration, but hopefully nothing too bad.

A third option would be to warp the labeled image onto the unwarped target image, but unfortunately I don't have something like that coded up just yet :( I could see it being useful in cases like this though, so if it's something you think you would use let me know and I'll try to put something together.

Best, -Chandler

abadgerw commented 2 years ago

Thanks! I think I may be confused about the differences and potential pros/cons. As background, I have been using the QuPath interactive image alignment feature. I would first rotate the annotated image so that it approximates the target image, then use the affine transformation to get a better match. I'd extract the values and input them into the following script:

Align Images and Transfer Annotations.txt

However, I was hoping to get better registration between the image with the annotations and the target slide, as the annotations commonly end up not quite where I'd like them. Therefore, I figured something a bit more advanced than an affine transformation would do the trick.

Therefore, would warping the labeled image onto the unwarped image (option 3) be the most similar to the approach I have been working with? Is there any downside to just warping the annotation vertices (option 1) rather than the labeled image itself (option 3)?

My apologies for all the naive questions. Thanks for all your help and support with this! I've just gotten VALIS downloaded onto our cluster computer and will be able to have a go and see how it all shakes out.

cdgatenbee commented 2 years ago

Hi @abadgerw, glad to hear you were able to get valis installed on the cluster, but sorry for not more clearly articulating the pros and cons of each approach. I think warping either the annotation vertices or the labeled image would be most similar to your current approach. I can't think of what the downsides of just warping the vertices would be, except possibly the polygon becoming "invalid". It's hard to know beforehand, but I think it would be fixable using something like Shapely to re-order the vertices (if necessary). Or QuPath may have some way to handle it? Sorry, I just haven't worked with QuPath's annotations before, but I may put together an example annotation and try this out to see how it goes.
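If a warped polygon does turn out to be invalid, here is a small sketch of a Shapely-based repair (make_valid requires Shapely >= 1.8; warped_xy is a hypothetical (N, 2) array of warped vertices):

from shapely.geometry import Polygon
from shapely.validation import make_valid

warped_poly = Polygon(warped_xy.tolist())
if not warped_poly.is_valid:
    # Repair self-intersections introduced by warping
    warped_poly = make_valid(warped_poly)  # or the classic warped_poly.buffer(0)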

If you'd like to warp the labeled image and apply it to the unwarped target image, you could take a slightly different approach. Instead of having "reference_img_f" be the image associated with the annotations, you could make it the target image. That way you would be aligning your annotated image to the target image, and then apply those same transformations to the labeled image. This would work ok if it's just a pair of images, but if you have more than two to align you would have to stick to doing pairwise registrations from your annotated image to the target image. At least until I've put together code that can warp one image onto another unwarped image. Anyhow, that would look like this:

from valis import registration, slide_io

# Perform registration
# target_img_f  is the name of the slide you want to transfer the annotations to, and it will not get warped
registrar = registration.Valis(slide_src_dir, results_dst_dir, reference_img_f=target_img_f)
rigid_registrar, non_rigid_registrar, error_df = registrar.register()

# annotation_img_f is the name of the slide the annotations are based on
# labeled_img_f is the name of the labeled slide, e.g. MS018C_Iba1-labels.ome.tif
annotation_slide = registrar.get_slide(annotation_img_f)
warped_labeled_img = annotation_slide.warp_slide(src_f=labeled_img_f)

# Create some basic metadata. Only really necessary if you want to include the physical units and/or channel names
bf_dtype = slide_io.vips2bf_dtype(warped_labeled_img.format)
xyzct = slide_io.get_shape_xyzct((warped_labeled_img.width, warped_labeled_img.height), warped_labeled_img.bands)
px_phys_size = annotation_slide.reader.metadata.pixel_physical_size_xyu
# Note that you can add channel names if your labeled image has a different classification in each channel
new_ome = slide_io.create_ome_xml(xyzct, bf_dtype, is_rgb=False, pixel_physical_size_xyu=px_phys_size)
ome_xml = new_ome.to_xml()

# dst_f  is the file name of warped labeled image.
slide_io.save_ome_tiff(warped_labeled_img, dst_f=dst_f, ome_xml=ome_xml)

After that you should hopefully be able to load the registered labeled image into QuPath and thereby apply the annotations to your unwarped target image.

Please let me know how it goes, and I'm more than happy to help if you run into some issues or have other questions.

Best, -Chandler

cdgatenbee commented 2 years ago

Hi @abadgerw, I thought you might like to know that valis now offers the ability to warp one image onto another, which you can use to transfer your annotations as labeled images to load into QuPath. Here's some example code, which is also included in the examples folder.

"""
Example showing how to warp an annotation image to go onto another image.

`slide_src_dir` is where the slides to be registered are located
`results_dst_dir` is where to save the results
`labeled_img_f` is the filename of the actual annotation image
"""

import os
import pathlib
from valis import registration, slide_io, viz, warp_tools

# Perform registration. Can optionally set the reference image to be the same as the one the annotations are based on (i.e. `reference_img_f`)
registrar = registration.Valis(slide_src_dir, results_dst_dir, reference_img_f=reference_img_f)
rigid_registrar, non_rigid_registrar, error_df = registrar.register()

# Load labeled image saved as `labeled_img_f`
labeled_img_reader_cls = slide_io.get_slide_reader(labeled_img_f)
labeled_img_reader = labeled_img_reader_cls(labeled_img_f)
labeled_img = labeled_img_reader.slide2vips(level=0)

# Have reference slide warp the labeled image onto the others and save the results
reference_slide = registrar.get_slide(reference_img_f)

annotations_dst_dir = os.path.join(registrar.dst_dir, "annotations")
pathlib.Path(annotations_dst_dir).mkdir(exist_ok=True, parents=True)

# Create and save an annotation image for each slide
for slide_obj in registrar.slide_dict.values():
    if slide_obj == reference_slide:
        continue

    # Warp the labeled image from the annotation image to this different slide #
    transferred_annotation_img = reference_slide.warp_img_from_to(labeled_img,
                                    to_slide_obj=slide_obj,
                                    interp_method="nearest")

    # Create image metadata. Note that you could add channel names if your labeled image has a different classification in each channel
    bf_dtype = slide_io.vips2bf_dtype(transferred_annotation_img.format)
    xyzct = slide_io.get_shape_xyzct((transferred_annotation_img.width, transferred_annotation_img.height), transferred_annotation_img.bands)
    px_phys_size = reference_slide.reader.metadata.pixel_physical_size_xyu
    new_ome = slide_io.create_ome_xml(xyzct, bf_dtype, is_rgb=False, pixel_physical_size_xyu=px_phys_size)
    ome_xml = new_ome.to_xml()

    # Save the labeled image as an ome.tiff #
    dst_f = os.path.join(annotations_dst_dir, f"{slide_obj.name}_annotations.ome.tiff")
    slide_io.save_ome_tiff(transferred_annotation_img, dst_f=dst_f, ome_xml=ome_xml)

    # Save a thumbnail with the annotation on top of the image #
    small_annotation_img = warp_tools.resize_img(transferred_annotation_img, slide_obj.image.shape[0:2])
    small_annotation_img_np = warp_tools.vips2numpy(small_annotation_img)
    small_img_with_annotation = viz.draw_outline(slide_obj.image, small_annotation_img_np)
    thumbnail_dst_f = os.path.join(annotations_dst_dir, f"{slide_obj.name}_annotated.png")
    warp_tools.save_img(thumbnail_dst_f, small_img_with_annotation)

Here are examples of what the thumbnails might look like:

[Thumbnails: DCIS_11_HER2_CD3_annotated, DCIS_11_CD31_COX2_annotated, DCIS_11_ER_CD68_annotated]

I haven't yet tried warping the annotation vertices output by QuPath (still plan to though), but hopefully this will be helpful when you're trying to transfer annotations. If you have any questions, please don't hesitate to ask.

Best, -Chandler

abadgerw commented 2 years ago

Chandler,

Thank you so much! I'm currently out of office for 2 weeks but will delve into applying these fantastic tools upon my return and will report back.

Thanks again for all your help and support with this!

yluo0512 commented 2 years ago

@cdgatenbee Thanks for this great package! I've been exploring valis and have a couple of questions.

  1. Can we do pairwise image registration in valis? For example, I have 20 H&E images with annotations, and I want to batch-align these H&E images to their paired Masson's trichrome images and transfer the H&E annotations to the Masson's trichrome images. Is that possible?
cdgatenbee commented 2 years ago

Hi @yluo0512,

Glad to hear you've been enjoying valis :) To answer your question, yes, it is possible to do pairwise alignments. There are two approaches you can use:

  1. For each sample, put each pair of images in a single folder and pass the name of the folder to the src_dir argument when you create the Valis object. Just be sure that the directory contains only the images you want to register, as valis will try to align everything that is in there. Or,
  2. Pass a list of the pair of images' filenames to img_list and set src_dir to your working directory when you create the Valis object. In this scenario, valis will only try to align the images in img_list, and since that list contains the full path to each image, the images can be located anywhere (e.g. your H&E image in a folder with 19 other H&E images). Also, src_dir isn't really used in that case, so it doesn't have to be your working directory, it can actually be whatever you'd like.

Since you'll be transferring the annotations from the H&E images, you might want to set reference_img_f to be the H&E filename, and set align_to_reference=True when you initialize the Valis object (not required for this to work, though). Once complete, you should be able to warp and save the annotation image as shown in the example above.
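A sketch of the second approach, with those reference settings included (paths and file names here are hypothetical):

from valis import registration

# Register one H&E / trichrome pair; only the images in img_list are aligned
img_list = ["/data/HE/sample01_HE.svs", "/data/trichrome/sample01_MT.svs"]
registrar = registration.Valis("/data", "/data/registration_results",
                               img_list=img_list,
                               reference_img_f="sample01_HE.svs",
                               align_to_reference=True)
rigid_registrar, non_rigid_registrar, error_df = registrar.register()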

I hope that helps, and if you have any more questions, please don't hesitate to ask.

Best, Chandler

yluo0512 commented 2 years ago


@cdgatenbee Hi Chandler,

Thank you very much for your response! I will give it a go based on your suggestions. One complication is that each image contains 3 tissue sections. I found an example image online: https://global.discourse-cdn.com/business4/uploads/imagej/original/3X/8/6/869d5e6f3bd5b99796f050c6214c87c05cfbc4e8.jpeg Since each image has multiple sections, I often find that the alignment works well for one section but not for the others. I thought about splitting each H&E image into 3 parts and aligning each part with its paired Masson's trichrome section. But because the annotation was done per whole image (so one image, containing 3 sections, has one geojson file), I can't actually split the geojson file into three parts. Therefore, even if I split the image into three parts, I can't transfer the annotations from one to the other, since I don't have a geojson file for each individual section.

I would love to hear your opinion on this! Or is there any valis functionality that could help with this problem? Thank you!

cdgatenbee commented 1 year ago

Hi @yluo0512, I see, that is a bit more complicated than I had realized. If you know the point at which you cut each image, you could subtract those values from the coordinates in the geojson file before warping. For example, say you are transferring the annotations for the 2nd (middle) section and sliced the H&E image at x = 100. After registering the images, you would first subtract 100 from each x value in the geojson file, and then warp the coordinates. The idea is basically that you shift the annotations to their position in the cut image before warping them. Do you think something like this would work?
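As a sketch (the crop offset and coordinate array are hypothetical; warp_xy_from_to is used as in the earlier example):

import numpy as np

# Shift annotations from the full 3-section image into the cropped middle
# section's coordinate frame (crop started at x = 100), then warp as usual
crop_x_offset = 100
shifted_xy = annotation_pt_xy - np.array([crop_x_offset, 0])
warped_xy = annotation_source_slide.warp_xy_from_to(shifted_xy, target_slide)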

Best, Chandler

abadgerw commented 1 year ago

@cdgatenbee my apologies for the delay in getting back to trying these options out. Valis has been downloaded onto our cluster computer, and I've attempted both the annotation-point warping and annotated-image warping scripts you highlighted above, but have run into a couple of issues.

For warping the annotation points, I ran the following:

from valis import registration
slide_src_dir = "/hpcdata/li_mdis/li_mdis_data/waldmanad/Valis/GeoJSON"
results_dst_dir = "./slide_registration_example_GeoJSON"
registered_slide_dst_dir = "./slide_registration_example_GeoJSON/registered_slides"

registrar = registration.Valis(slide_src_dir, results_dst_dir)
rigid_registrar, non_rigid_registrar, error_df = registrar.register()

annotation_pt_xy = "MS040_SC_T.geojson"
annotation_img_f = "MS040_SC_T_TMEM119_40X.svs"
target_img_f = "MS040_SC_T_PLP_40X.svs"

annotation_source_slide = registrar.get_slide(annotation_img_f)
target_slide = registrar.get_slide(target_img_f)

annotations_on_target_slide_xy = annotation_source_slide.warp_xy_from_to(annotation_pt_xy, target_slide)

After the last command, I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/registration.py", line 1135, in warp_xy_from_to
    warp_tools.warp_xy_from_to(xy=xy,
  File "/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/warp_tools.py", line 2330, in warp_xy_from_to
    xy_in_reg_space = warp_xy(xy, M=from_M,
  File "/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/warp_tools.py", line 2176, in warp_xy
    warped_xy = _warp_xy_numpy(xy, M, transformation_src_shape_rc=transformation_src_shape_rc,
  File "/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/warp_tools.py", line 2074, in _warp_xy_numpy
    in_src_xy = xy/src_sxy
TypeError: ufunc 'divide' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''

For warping the annotated image, I ran the following:

import os
import pathlib
from valis import registration, slide_io, viz, warp_tools

slide_src_dir = "/hpcdata/li_mdis/li_mdis_data/waldmanad/Valis/OME"
results_dst_dir = "./slide_registration_example_OME"
registered_slide_dst_dir = "./slide_registration_example_OME/registered_slides"

labeled_img_f = "MS040_SC_T_TMEM119_40X-labels-multi.ome.tif"
reference_img_f = "MS040_SC_T_TMEM119_40X.svs"

registrar = registration.Valis(slide_src_dir, results_dst_dir, reference_img_f=reference_img_f, align_to_reference=True)
rigid_registrar, non_rigid_registrar, error_df = registrar.register()

labeled_img_reader_cls = slide_io.get_slide_reader(labeled_img_f)
labeled_img_reader = labeled_img_reader_cls(labeled_img_f)
labeled_img = labeled_img_reader.slide2vips(level=0)

reference_slide = registrar.get_slide(reference_img_f)

annotations_dst_dir = os.path.join(registrar.dst_dir, "annotations")
pathlib.Path(annotations_dst_dir).mkdir(exist_ok=True, parents=True)

for slide_obj in registrar.slide_dict.values():
    if slide_obj == reference_slide:
        continue

    transferred_annotation_img = reference_slide.warp_img_from_to(labeled_img,
                                    to_slide_obj=slide_obj,
                                    interp_method="nearest")

    bf_dtype = slide_io.vips2bf_dtype(transferred_annotation_img.format)
    xyzct = slide_io.get_shape_xyzct((transferred_annotation_img.width, transferred_annotation_img.height), transferred_annotation_img.bands)
    px_phys_size = reference_slide.reader.metadata.pixel_physical_size_xyu
    new_ome = slide_io.create_ome_xml(xyzct, bf_dtype, is_rgb=False, pixel_physical_size_xyu=px_phys_size)
    ome_xml = new_ome.to_xml()

    dst_f = os.path.join(annotations_dst_dir, f"{slide_obj.name}_annotations.ome.tiff")
    slide_io.save_ome_tiff(transferred_annotation_img, dst_f=dst_f, ome_xml=ome_xml)

    small_annotation_img = warp_tools.resize_img(transferred_annotation_img, slide_obj.image.shape[0:2])
    small_annotation_img_np = warp_tools.vips2numpy(small_annotation_img)
    small_img_with_annotation = viz.draw_outline(slide_obj.image, small_annotation_img_np)
    thumbnail_dst_f = os.path.join(annotations_dst_dir, f"{slide_obj.name}_annotated.png")
    warp_tools.save_img(thumbnail_dst_f, small_img_with_annotation)

For the last set of commands, starting with the for loop, my terminal stops responding right away and needs to be force quit.

If I remove the following bit of code it runs, but very slowly (it is currently at 4.7% after 47 minutes of running, which makes me think I may be doing something wrong?):

    small_annotation_img = warp_tools.resize_img(transferred_annotation_img, slide_obj.image.shape[0:2])
    small_annotation_img_np = warp_tools.vips2numpy(small_annotation_img)
    small_img_with_annotation = viz.draw_outline(slide_obj.image, small_annotation_img_np)
    thumbnail_dst_f = os.path.join(annotations_dst_dir, f"{slide_obj.name}_annotated.png")
    warp_tools.save_img(thumbnail_dst_f, small_img_with_annotation)

Edit: It ultimately stopped responding at 6.2% after 62 minutes and led me to force quit.

Here is a link with the files I was using in case that is helpful: https://drive.google.com/drive/folders/1tND01fV9JDP7oPQB31Mz5zxCCyvSKFas?usp=sharing

Thanks for all your help!

abadgerw commented 1 year ago

@cdgatenbee, as an update, I spoke to our cluster computing experts and unfortunately they didn't seem to have any insight into the error message above or the crashing issue. Do you think it's specific to using the program on the cluster, or something else?

As always, thanks for your help with this fantastic package!

cdgatenbee commented 1 year ago

Hi @abadgerw, Sorry I haven't had a chance to work on this using your test files yet, but I'll take a look now. I have some guesses about what is going on. I've run valis on several different clusters, so I don't think that is the issue. I'll let you know how it goes.

Best, Chandler

abadgerw commented 1 year ago

Sounds good, @cdgatenbee! Thanks for your help and do not hesitate to let me know if you need anything on my end for the troubleshooting.

cdgatenbee commented 1 year ago

Hi @abadgerw, I believe I've figured out the issue with warping the geojson file. The example code I had provided assumed the input would be the actual coordinates, not the file name, which is why you were getting the error. So, I put together some code that will read the geojson file, warp all of the annotations, and return a geojson dictionary that can be saved as a .geojson file and then dragged and dropped into QuPath. I'll post the code below, and will also include it in the next update, although there the slide object will do the point warping, to be consistent with how images and xy coordinates are warped. I'd also like to add an example to the README; if I do, would you mind if I share the screenshots? Anyhow, here is how it turned out:

[Original annotation: annotation_original]

[Transferred annotation: annotation_transferred]

And here is the code


import numpy as np
import json
import os
import pathlib
import shapely
import tqdm
from copy import deepcopy

from valis import registration, valtils, warp_tools

def clip_xy(xy, shape_rc):
    """Clip xy coordintaes to be within image

    """
    clipped_x =  np.clip(xy[:, 0], 0, shape_rc[1])
    clipped_y =  np.clip(xy[:, 1], 0, shape_rc[0])

    clipped_xy = np.dstack([clipped_x, clipped_y])[0]
    return clipped_xy

def _warp_shapely(geom, warp_fxn, warp_kwargs):
    """Warp a shapely geometry
    Based on shapely.ops.transform

    """
    if "dst_shape_rc" in warp_kwargs:
        dst_shape_rc = warp_kwargs["dst_shape_rc"]
    elif "to_dst_shape_rc" in warp_kwargs:
        dst_shape_rc = warp_kwargs["to_dst_shape_rc"]
    else:
        dst_shape_rc  = None

    if geom.geom_type in ("Point", "LineString", "LinearRing", "Polygon"):
        if geom.geom_type in ("Point", "LineString", "LinearRing"):
            warped_xy = warp_fxn(np.vstack(geom.coords), **warp_kwargs)
            if dst_shape_rc is not None:
                warped_xy = clip_xy(warped_xy, dst_shape_rc)

            return type(geom)(warped_xy.tolist())

        elif geom.geom_type == "Polygon":
            shell_xy = warp_fxn(np.vstack(geom.exterior.coords), **warp_kwargs)
            if dst_shape_rc is not None:
                shell_xy = clip_xy(shell_xy, dst_shape_rc)
            shell = type(geom.exterior)(shell_xy.tolist())
            holes = []
            for ring in geom.interiors:
                holes_xy = warp_fxn(np.vstack(ring.coords), **warp_kwargs)
                if dst_shape_rc is not None:
                    holes_xy = clip_xy(holes_xy, dst_shape_rc)

                holes.append(type(ring)(holes_xy))

            return type(geom)(shell, holes)

    elif geom.geom_type.startswith("Multi") or geom.geom_type == "GeometryCollection":
        return type(geom)([_warp_shapely(part, warp_fxn, warp_kwargs) for part in geom.geoms])
    else:
        raise shapely.errors.GeometryTypeError(f"Type {geom.geom_type!r} not recognized")

def warp_shapely_geom_from_to(geom, from_M=None, from_transformation_src_shape_rc=None,
                   from_transformation_dst_shape_rc=None, from_src_shape_rc=None,
                   from_dst_shape_rc=None,from_bk_dxdy=None, from_fwd_dxdy=None,
                   to_M=None, to_transformation_src_shape_rc=None,
                   to_transformation_dst_shape_rc=None, to_src_shape_rc=None,
                   to_dst_shape_rc=None, to_bk_dxdy=None, to_fwd_dxdy=None):
    """
    Warp xy points using M and/or bk_dxdy/fwd_dxdy. If bk_dxdy is provided, it will be inverted to  create fwd_dxdy

    Parameters
    ----------
    geom : shapely.geometery
        Shapely geom to warp

    M : ndarray, optional
         3x3 affine transformation matrix to perform rigid warp

    transformation_src_shape_rc : (int, int)
        Shape of image that was used to find the transformation.
        For example, this could be the original image in which features were detected

    transformation_dst_shape_rc : (int, int), optional
        Shape of the image with shape `transformation_src_shape_rc` after warping.
        This could be the shape of the original image after applying `M`.

    src_shape_rc : optional, (int, int)
        Shape of the image from which the points originated. For example,
        this could be a larger/smaller version of the image that was
        used for feature detection.

    dst_shape_rc : optional, (int, int)
        Shape of image (with shape `src_shape_rc`) after warping

    bk_dxdy : ndarray, pyvips.Image
        (2, N, M) numpy array of pixel displacements in the x and y
        directions from the reference image. dx = bk_dxdy[0],
        and dy=bk_dxdy[1]. If `bk_dxdy` is not None, but
        `fwd_dxdy` is None, then `bk_dxdy` will be inverted to warp `xy`.

    fwd_dxdy : ndarray, pyvips.Image
        Inverse of bk_dxdy. dx = fwd_dxdy[0], and dy=fwd_dxdy[1].
        This is what is actually used to warp the points.

    pt_buffer : int
        If `bk_dxdy` or `fwd_dxdy` are pyvips.Image object, then
        `pt_buffer` determines the size of the window around the point used to
        get the local displacements.

    Returns
    -------
    warped_geom : shapely.geom
       Warped `geom`

    """

    warp_kwargs = {"from_M": from_M,
                   "from_transformation_src_shape_rc": from_transformation_src_shape_rc,
                   "from_transformation_dst_shape_rc": from_transformation_dst_shape_rc,
                   "from_src_shape_rc": from_src_shape_rc,
                   "from_dst_shape_rc":from_dst_shape_rc,
                   "from_bk_dxdy":from_bk_dxdy,
                   "from_fwd_dxdy":from_fwd_dxdy,
                   "to_M":to_M,
                   "to_transformation_src_shape_rc": to_transformation_src_shape_rc,
                   "to_transformation_dst_shape_rc": to_transformation_dst_shape_rc,
                   "to_src_shape_rc": to_src_shape_rc,
                   "to_dst_shape_rc": to_dst_shape_rc, "to_bk_dxdy": to_bk_dxdy,
                   "to_fwd_dxdy":to_fwd_dxdy}

    warped_geom = _warp_shapely(geom, warp_tools.warp_xy_from_to, warp_kwargs)

    return warped_geom

def warp_geojson_from_to(geojson_f, annotation_slide, to_slide_obj, src_slide_level=0, src_pt_level=0,
                        dst_slide_level=0, non_rigid=True):
    """Warp geoms in geojson file from annotation slide to another unwarped slide

    Takes a set of geometries found in this annotation slide, and warps them to
    their position in the unwarped "to" slide.

    Parameters
    ----------
    geojson_f : str
        Path to geojson file containing the annotation geometries

    to_slide_obj : Slide
        Slide to which the points will be warped. I.e. `xy`
        will be warped from this Slide to their position in
        the unwarped slide associated with `to_slide_obj`.

    src_pt_level: int, tuple, optional
        Pyramid level of the slide/image in which `xy` originated.
        For example, if `xy` are from the centroids of cell segmentation
        performed on the unwarped full resolution image, this should be 0.
        Alternatively, the value can be a tuple of the image's shape (row, col)
        from which the points came. For example, if `xy` are  bounding
        box coordinates from an analysis on a lower resolution image,
        then pt_level is that lower resolution image's shape (row, col).

    dst_slide_level: int, tuple, optional
        Pyramid level of the slide/image in to `xy` will be warped.
        Similar to `src_pt_level`, if `dst_slide_level` is an int then
        the points will be warped to that pyramid level. If `dst_slide_level`
        is the "to" image's shape (row, col), then the points will be warped
        to their location in an image with that same shape.

    non_rigid : bool, optional
        Whether or not to conduct non-rigid warping. If False,
        then only a rigid transformation will be applied.

    """

    if np.issubdtype(type(src_pt_level), np.integer):
        src_pt_dim_rc = annotation_slide.slide_dimensions_wh[src_pt_level][::-1]
    else:
        src_pt_dim_rc = np.array(src_pt_level)

    if np.issubdtype(type(dst_slide_level), np.integer):
        to_slide_src_shape_rc = to_slide_obj.slide_dimensions_wh[dst_slide_level][::-1]
    else:
        to_slide_src_shape_rc = np.array(dst_slide_level)

    if src_slide_level != 0:
        if np.issubdtype(type(src_slide_level), np.integer):
            aligned_slide_shape = annotation_slide.val_obj.get_aligned_slide_shape(src_slide_level)
        else:
            aligned_slide_shape = np.array(src_slide_level)
    else:
        aligned_slide_shape = annotation_slide.aligned_slide_shape_rc

    if non_rigid:
        src_fwd_dxdy = annotation_slide.fwd_dxdy
        dst_bk_dxdy = to_slide_obj.bk_dxdy

    else:
        src_fwd_dxdy = None
        dst_bk_dxdy = None

    with open(geojson_f) as f:
        annotation_geojson = json.load(f)

    warped_features = [None]*len(annotation_geojson["features"])
    for i, ft in tqdm.tqdm(enumerate(annotation_geojson["features"])):
        geom = shapely.geometry.shape(ft["geometry"])
        warped_geom = warp_shapely_geom_from_to(geom=geom,
                                        from_M=annotation_slide.M,
                                        from_transformation_dst_shape_rc=annotation_slide.reg_img_shape_rc,
                                        from_transformation_src_shape_rc=annotation_slide.processed_img_shape_rc,
                                        from_dst_shape_rc=aligned_slide_shape,
                                        from_src_shape_rc=src_pt_dim_rc,
                                        from_fwd_dxdy=src_fwd_dxdy,
                                        to_M=to_slide_obj.M,
                                        to_transformation_src_shape_rc=to_slide_obj.processed_img_shape_rc,
                                        to_transformation_dst_shape_rc=to_slide_obj.reg_img_shape_rc,
                                        to_src_shape_rc=to_slide_src_shape_rc,
                                        to_dst_shape_rc=aligned_slide_shape,
                                        to_bk_dxdy=dst_bk_dxdy
                                        )

        warped_ft = deepcopy(ft)
        warped_ft["geometry"] = shapely.geometry.mapping(warped_geom)
        warped_features[i] = warped_ft

    warped_geojson = {"type":annotation_geojson["type"], "features":warped_features}

    return warped_geojson

annotation_geojson_f = "MS040_SC_T.geojson"
annotation_img_f = "MS040_SC_T_TMEM119_40X.svs"
target_img_f = "MS040_SC_T_PLP_40X.svs"
warped_geojson_annotation_f = f"{valtils.get_name(target_img_f)}_annotations.geojson"

# Perform registration
registrar = registration.Valis(slide_src_dir, results_dst_dir, reference_img_f=annotation_img_f)
rigid_registrar, non_rigid_registrar, error_df = registrar.register()

# Transfer annotation
annotation_geojson_f = os.path.join(annotation_dir, "MS040_SC_T.geojson")
annotation_source_slide = registrar.get_slide(annotation_img_f)
target_slide = registrar.get_slide(target_img_f)

warped_geojson = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide)

# Save annotation, which can be dragged and dropped into QuPath
with open(warped_geojson_annotation_f, 'w') as f:
   json.dump(warped_geojson, f)

Hopefully the above works, but if not, please let me know. I still need to look into why resizing the annotated images was taking so long, but it might be due to the sheer size of the image, i.e. it was trying to resize a 37847 x 40518 x 44 image. I'll test it out to confirm, but I thought I would go ahead and share this for now.
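As a rough back-of-the-envelope check (assuming 8-bit pixels), the memory needed just to hold that image as a dense array is substantial:

n_bytes = 37847 * 40518 * 44   # ≈ 6.7e10 bytes, before any resize buffers
print(f"{n_bytes / 1e9:.1f} GB")  # ≈ 67.5 GB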

Best, Chandler

abadgerw commented 1 year ago

Thanks, @cdgatenbee! You can most definitely use the screenshots for the README. The annotation transfer looks quite good! I will see if I can recapitulate the results and check how it performs with some other image sets. If I have multiple serial sections from the same case do you recommend that I warp the annotations to all of them at once or do it in a pairwise way? Would either option be more accurate?

As for the annotated image warping, do you think it is because I was using a multichannel (with 8 channels) rather than a binary image? I have uploaded a binary version to the Google Drive link in case that is useful.

cdgatenbee commented 1 year ago

Sorry for the delayed response, @abadgerw. If the primary goal is to transfer the annotations, then you could set align_to_reference=True when you initialize the Valis object. This will have a similar effect to doing separate pairwise alignments to the reference image, but you'll only need to perform one registration. Then you should be able to loop through each target slide to warp the annotation files for use in QuPath. I haven't had a chance to check out the binary annotated image, but I would imagine it would resize quite a bit faster than the 8-channel one. I'll check it out to confirm, though.
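For example, the loop might look something like this (a sketch reusing warp_geojson_from_to and the registrar from the examples above):

import json

annotation_source_slide = registrar.get_slide(annotation_img_f)

for slide_obj in registrar.slide_dict.values():
    if slide_obj == annotation_source_slide:
        continue  # skip the slide the annotations came from

    warped_geojson = warp_geojson_from_to(annotation_geojson_f,
                                          annotation_source_slide, slide_obj)
    with open(f"{slide_obj.name}_annotations.geojson", "w") as f:
        json.dump(warped_geojson, f)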

Best, -Chandler

abadgerw commented 1 year ago

@cdgatenbee Hope you had a nice holiday! I have been doing some more benchmarking, and everything looks quite good so far with warping annotations for transfer. However, ~10% of the time I run into an issue: I'm running on a cluster computer, and ~10% of the cases end up having the process killed for some reason. I can get these cases to run one by one. However, they take quite a long time, and when I inspect the registration, at least one of the stains seems to be picking up only a small piece of the tissue.

I'm attaching the inputs and outputs for one example here: https://drive.google.com/drive/folders/1m_sbhxthZymHpKj7m863Bi3j4ElLTqRH?usp=sharing

As you can see, the PLP image seems to be the culprit even though the input image quality is good. Do you have any insight into how I can address this when it happens and/or prevent this?

I used the following code:

import numpy as np
import json
import os
import pathlib
import shapely
import tqdm
from copy import deepcopy

from valis import registration, valtils, warp_tools

# clip_xy, _warp_shapely, warp_shapely_geom_from_to, and warp_geojson_from_to
# are the same functions defined in the example above

slide_src_dir = "/hpcdata/li_mdis/li_mdis_data/waldmanad/VALIS/MS077L"
results_dst_dir = "/hpcdata/li_mdis/li_mdis_data/waldmanad/VALIS/MS077L"
annotation_dir = "/hpcdata/li_mdis/li_mdis_data/waldmanad/VALIS/MS077L"
annotation_geojson_f = "MS077_SC_L_40X.geojson"
annotation_img_f = "MS077_SC_L_TMEM119_40X.svs"
target_img_f_Palmgren = "MS077_SC_L_Palmgren_40X.svs"
target_img_f_PLP = "MS077_SC_L_PLP_40X.svs"
target_img_f_PGM1 = "MS077_SC_L_PGM1_40X.svs"
target_img_f_IBA1 = "MS077_SC_L_IBA1_40X.svs"
warped_geojson_annotation_f_Palmgren = f"{valtils.get_name(target_img_f_Palmgren)}_annotations.geojson"
warped_geojson_annotation_f_PLP = f"{valtils.get_name(target_img_f_PLP)}_annotations.geojson"
warped_geojson_annotation_f_PGM1 = f"{valtils.get_name(target_img_f_PGM1)}_annotations.geojson"
warped_geojson_annotation_f_IBA1 = f"{valtils.get_name(target_img_f_IBA1)}_annotations.geojson"

# Perform registration
registrar = registration.Valis(slide_src_dir, results_dst_dir, reference_img_f=annotation_img_f, align_to_reference=True)
rigid_registrar, non_rigid_registrar, error_df = registrar.register()

# Transfer annotation
annotation_geojson_f = os.path.join(annotation_dir, "MS077_SC_L_40X.geojson")
annotation_source_slide = registrar.get_slide(annotation_img_f)
target_slide_Palmgren = registrar.get_slide(target_img_f_Palmgren)
target_slide_PLP = registrar.get_slide(target_img_f_PLP)
target_slide_PGM1 = registrar.get_slide(target_img_f_PGM1)
target_slide_IBA1 = registrar.get_slide(target_img_f_IBA1)

warped_geojson_Palmgren = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_Palmgren)
warped_geojson_PLP = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_PLP)
warped_geojson_PGM1 = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_PGM1)
warped_geojson_IBA1 = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_IBA1)

# Save annotation, which can be dragged and dropped into QuPath
with open(warped_geojson_annotation_f_Palmgren, 'w') as f:
   json.dump(warped_geojson_Palmgren, f)

with open(warped_geojson_annotation_f_PLP, 'w') as f:
   json.dump(warped_geojson_PLP, f)

with open(warped_geojson_annotation_f_PGM1, 'w') as f:
   json.dump(warped_geojson_PGM1, f)

with open(warped_geojson_annotation_f_IBA1, 'w') as f:
   json.dump(warped_geojson_IBA1, f)

As always, thank you for all your help!

cdgatenbee commented 1 year ago

Hi @abadgerw, Apologies for the delay in getting back to you. Based on these images, it looks like the "problem" (at least in this case) is that the large differences in staining intensity make for a more challenging registration problem, as the images look pretty different to the computer. What seems to be happening is that valis isn't finding many good feature matches between PLP and the other images, and so the estimated transform wasn't very good. Fortunately, one way to address this is to try different preprocessing methods that make the images look more similar. I played around a bit and modified the BgColorDistance image processor (in valis.preprocessing), and it seems to work pretty well for these samples. Here is the original overlap image

[Image: Input_original_overlap]

and the non-rigid overlap image: [Image: Input_non_rigid_overlap]

The code to implement the custom image processor is:

from skimage import exposure
import numpy as np
from valis import preprocessing

class PreProcessor(preprocessing.ImageProcesser):
    """Preprocess images for registration

    Uses distance to background color and also subtracts the background intensity

    """

    def __init__(self, image, src_f, level, series, *args, **kwargs):
        super().__init__(image=image, src_f=src_f, level=level,
                         series=series, *args, **kwargs)

    def create_mask(self):
        _, tissue_mask = preprocessing.create_tissue_mask_from_rgb(self.image)

        return tissue_mask

    def process_image(self, brightness_q=0.99, *args, **kwargs):

        processed_img, cam = preprocessing.calc_background_color_dist(self.image, brightness_q=brightness_q)
        processed_img = exposure.rescale_intensity(processed_img, in_range="image", out_range=(0, 1))

        # Subtract background
        brightest_thresh = np.quantile(cam[..., 0], 0.9)
        brightest_idx = np.where(cam[..., 0] >= brightest_thresh)
        brightest_pixels = processed_img[brightest_idx]
        brightest_rgb = np.median(brightest_pixels, axis=0)
        no_bg = processed_img - brightest_rgb
        no_bg = np.clip(no_bg, 0, 255)

        # Adjust range and perform adaptive histogram equalization
        no_bg = (255*exposure.equalize_adapthist(no_bg/no_bg.max())).astype(np.uint8)

        return no_bg

To use the PreProcessor, simply pass it into the register method as the brightfield_processing_cls argument, so that it becomes the object that pre-processes the IHC images:

rigid_registrar, non_rigid_registrar, error_df = registrar.register(brightfield_processing_cls=PreProcessor)

It's hard to know if this will work with the remaining samples, but please give it a shot and let me know how it goes.

Best, -Chandler

P.S. Happy New Year!

abadgerw commented 1 year ago

Thanks, @cdgatenbee! Happy New Year! I'll try this for the other cases and see how it goes.

One additional question:

I have some annotations that are boxes of a fixed size, and I notice that they are no longer that consistent size after annotation transfer. Is this because I'm using both a rigid and a non-rigid registration? Would the boxes stay a consistent size if I only did a rigid registration? I assume the registration would not be as good that way, though?

cdgatenbee commented 1 year ago

Hi @abadgerw, Yes, that's correct, the boxes' width/height ratio should stay consistent with just rigid registration, but the alignment may be less accurate since there won't be any way to "fix" small scale deformations, like stretching/compression in some regions of the tissue. One can get a sense of how much the non-rigid registration (hopefully) improved the alignment by comparing the "mean_rigid_D" and "mean_non_rigid_D" columns in the summary csv file (located in the "data" directory created during registration). Also, if you'd like to see how the rigidly transformed annotations look, you can warp the annotations using only the rigid transforms by setting non_rigid=False in the warp_geojson_from_to function above. It should be pretty quick and you would be able to determine if the loss in registration accuracy is acceptable or not.
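For instance, a sketch reusing the names from the earlier example:

# Rigid-only transfer: skip the non-rigid displacement fields
warped_geojson_rigid = warp_geojson_from_to(annotation_geojson_f,
                                            annotation_source_slide,
                                            target_slide,
                                            non_rigid=False)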

Best, -Chandler

abadgerw commented 1 year ago

Thanks, @cdgatenbee! I used the updated code with pre-processing on a larger set of images and ran into two separate errors that I wanted to get your take on:

Error 1

For one case (inputs here: https://drive.google.com/drive/folders/1lY-Xp9f76iV1jM8Qm16IOjvPeHb8QX4R?usp=sharing) I got the following error and no outputs:

Traceback (most recent call last):
  File "/hpcdata/li_mdis/li_mdis_data/waldmanad/VALIS/Scripts/Priority/TMEM119_Missing_CD68/VALIS.py", line 317, in <module>
    warped_geojson_Palmgren = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_Palmgren)
  File "/hpcdata/li_mdis/li_mdis_data/waldmanad/VALIS/Scripts/Priority/TMEM119_Missing_CD68/VALIS.py", line 250, in warp_geojson_from_to
    warped_geom = warp_shapely_geom_from_to(geom=geom,
  File "/hpcdata/li_mdis/li_mdis_data/waldmanad/VALIS/Scripts/Priority/TMEM119_Missing_CD68/VALIS.py", line 174, in warp_shapely_geom_from_to
    warped_geom = _warp_shapely(geom, warp_tools.warp_xy_from_to, warp_kwargs)
  File "/hpcdata/li_mdis/li_mdis_data/waldmanad/VALIS/Scripts/Priority/TMEM119_Missing_CD68/VALIS.py", line 99, in _warp_shapely
    return type(geom)([_warp_shapely(part, warp_fxn, warp_kwargs) for part in geom.geoms])
AttributeError: 'Polygon' object has no attribute 'geoms'. Did you mean: '_geom'?

I've attached the full log file here as well: https://drive.google.com/drive/folders/1zTo6SyVoeoZHsTFaDqKJT21WyH3a58qB?usp=sharing

Error 2:

For quite a few cases (inputs here for one example: https://drive.google.com/drive/folders/1PdQcZyTaSAb2E7HsNgdT2tYQAUqQuYCa?usp=sharing) I got outputs but the annotations didn't map well and the following error:

======== Calculating transformations

/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/valtils.py:75: UserWarning: OpenCV(4.6.0) /io/opencv/modules/calib3d/src/fundam.cpp:385: error: (-28:Unknown error code -28) The input arrays should have at least 4 corresponding point sets to calculate Homography in function 'findHomography'

warnings.warn(warning_msg, warning_type)
Traceback (most recent call last):
  File "/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/registration.py", line 3373, in register
    rigid_registrar = self.rigid_register()
  File "/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/registration.py", line 2420, in rigid_register
    rigid_registrar = serial_rigid.register_images(self.processed_dir,
  File "/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/serial_rigid.py", line 1531, in register_images
    registrar.update_match_dicts_with_neighbor_filter(transformer, matcher)
  File "/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/serial_rigid.py", line 802, in update_match_dicts_with_neighbor_filter
    self.neighbor_match_filtering(img_obj, prev_img_obj,
  File "/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/serial_rigid.py", line 762, in neighbor_match_filtering
    matcher_obj.match_images(desc1=filtered_desc,
  File "/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/feature_matcher.py", line 904, in match_images
    match_desc_and_kp(desc1, kp1_xy, desc2, kp2_xy,
  File "/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/feature_matcher.py", line 593, in match_desc_and_kp
    filter_matches(matched_kp1_xy, matched_kp2_xy, filter_method, all_filtering_kwargs)
  File "/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/feature_matcher.py", line 313, in filter_matches
    filtered_src_points, filtered_dst_points, good_idx = filter_fxn(**all_matching_args)
  File "/sysapps/cluster/software/Anaconda3/2022.05/envs/valis-1.0.0rc12/lib/python3.10/site-packages/valis/feature_matcher.py", line 116, in filter_matches_ransac
    _, mask = cv2.findHomography(kp1_xy, kp2_xy, cv2.RANSAC, ransac_val)
cv2.error: OpenCV(4.6.0) /io/opencv/modules/calib3d/src/fundam.cpp:385: error: (-28:Unknown error code -28) The input arrays should have at least 4 corresponding point sets to calculate Homography in function 'findHomography'

JVM has been killed. If this was due to an error, then a new Python session will need to be started

I've attached the full log file here as well: https://drive.google.com/drive/folders/1ar6Ce-ZrFfMsPD-WDVp6hKVNd8E6k40f?usp=sharing

Any potential insights would be much appreciated!

As always, thanks for your help!

cdgatenbee commented 1 year ago

Hi @abadgerw, I'll start looking into this. Thanks so much for sharing the files to debug :)

Best, -Chandler

abadgerw commented 1 year ago

Thanks, @cdgatenbee! I really appreciate it! Do not hesitate to let me know if you need anything else on my end.

abadgerw commented 1 year ago

@cdgatenbee I just wanted to see if you had any luck with troubleshooting? Happy to pass along any additional data, etc that may be useful. Thanks again for all your help!

cdgatenbee commented 1 year ago

Hi @abadgerw,

Sorry, I've had a few pressing deadlines and so haven't had a chance to work on this issue :( I hope to get to it sometime this week though. I'll keep you posted.

Best, -Chandler

cdgatenbee commented 1 year ago

Hi @abadgerw,

Thanks again for reporting these issues and sharing the images to recreate them. Regarding the first example, I did run into an error, but it was a different one. In my case, there was a problem warping one of the annotations because it was considered empty, so there weren't actually any points to warp. I've updated the code to address that, and it'll be in the next update. After fixing that, I didn't get any more errors, but I was using the most recent version of valis, which includes the annotation warping method. If you were using the function from above, maybe there was an error there that got fixed in the last update?

Regarding the 2nd example, based on the error it looks like valis wasn't getting a good rigid registration because it couldn't find enough good feature matches. But when I ran it using the most recent version of valis and the PreProcessor object above, I got what looked like pretty decent results. The transferred annotations looked like this:

[Image: example_2_transferred]

Below is the code I used to register the images and warp the annotations:

import json
import os

from valis import registration

# src_dir and res_dir are the slide source and results directories
registrar = registration.Valis(src_dir, res_dir)
rigid_registrar, non_rigid_registrar, error_df = registrar.register(brightfield_processing_cls=PreProcessor)

# Warp annotations
annotation_f = "MS018_SC_T_40X.geojson"
warped_anno_dir = "warped_annotations"
os.makedirs(warped_anno_dir, exist_ok=True)  # make sure the output directory exists
annotation_slide_name = "MS018_SC_T_TMEM119_40X"
annotation_slide = registrar.get_slide(annotation_slide_name)
for slide_name, slide_obj in registrar.slide_dict.items():
    if slide_name == annotation_slide_name:
        continue

    transferred_anno = annotation_slide.warp_geojson_from_to(annotation_f, slide_obj)
    warped_geojson_annotation_f = os.path.join(warped_anno_dir, f"{slide_name}_annotation.json")
    with open(warped_geojson_annotation_f, 'w') as f:
        json.dump(transferred_anno, f)

Above you had said the annotations didn't map well, but do these look better?

Best, -Chandler

abadgerw commented 1 year ago

Thanks, @cdgatenbee! I was still using the old version that was installed on our cluster, so I think these issues may be due to that. I'll ask for the version to be updated and see how that goes.

abadgerw commented 1 year ago

@cdgatenbee The new version has been installed on our cluster, but I think I may be confused about how to amend the previous code and which parts need to be called. I was originally running the following with the previous version, using a shell script to allow looping across images:

import numpy as np
import json
import os
import pathlib
import shapely
import tqdm
from copy import deepcopy

from skimage import exposure
from valis import preprocessing

class PreProcessor(preprocessing.ImageProcesser):
    """Preprocess images for registration

    Uses distance to background color and also subtracts the background intensity

    """

    def __init__(self, image, src_f, level, series, *args, **kwargs):
        super().__init__(image=image, src_f=src_f, level=level,
                         series=series, *args, **kwargs)

    def create_mask(self):
        _, tissue_mask = preprocessing.create_tissue_mask_from_rgb(self.image)

        return tissue_mask

    def process_image(self, brightness_q=0.99, *args, **kwargs):

        processed_img, cam = preprocessing.calc_background_color_dist(self.image, brightness_q=brightness_q)
        processed_img = exposure.rescale_intensity(processed_img, in_range="image", out_range=(0, 1))

        # Subtract background
        brightest_thresh = np.quantile(cam[..., 0], 0.9)
        brightest_idx = np.where(cam[..., 0] >= brightest_thresh)
        brightest_pixels = processed_img[brightest_idx]
        brightest_rgb = np.median(brightest_pixels, axis=0)
        no_bg = processed_img - brightest_rgb
        no_bg = np.clip(no_bg, 0, 255)

        # Adjust range and perform adaptive histogram equalization
        no_bg = (255*exposure.equalize_adapthist(no_bg/no_bg.max())).astype(np.uint8)

        return no_bg

from valis import registration, valtils, warp_tools

def clip_xy(xy, shape_rc):
    """Clip xy coordintaes to be within image

    """
    clipped_x =  np.clip(xy[:, 0], 0, shape_rc[1])
    clipped_y =  np.clip(xy[:, 1], 0, shape_rc[0])

    clipped_xy = np.dstack([clipped_x, clipped_y])[0]
    return clipped_xy

def _warp_shapely(geom, warp_fxn, warp_kwargs):
    """Warp a shapely geometry
    Based on shapely.ops.transform

    """
    if "dst_shape_rc" in warp_kwargs:
        dst_shape_rc = warp_kwargs["dst_shape_rc"]
    elif "to_dst_shape_rc" in warp_kwargs:
        dst_shape_rc = warp_kwargs["to_dst_shape_rc"]
    else:
        dst_shape_rc  = None

    if geom.geom_type in ("Point", "LineString", "LinearRing", "Polygon"):
        if geom.geom_type in ("Point", "LineString", "LinearRing"):
            warped_xy = warp_fxn(np.vstack(geom.coords), **warp_kwargs)
            if dst_shape_rc is not None:
                warped_xy = clip_xy(warped_xy, dst_shape_rc)

            return type(geom)(warped_xy.tolist())

        elif geom.geom_type == "Polygon":
            shell_xy = warp_fxn(np.vstack(geom.exterior.coords), **warp_kwargs)
            if dst_shape_rc is not None:
                shell_xy = clip_xy(shell_xy, dst_shape_rc)
            shell = type(geom.exterior)(shell_xy.tolist())
            holes = []
            for ring in geom.interiors:
                holes_xy = warp_fxn(np.vstack(ring.coords), **warp_kwargs)
                if dst_shape_rc is not None:
                    holes_xy = clip_xy(holes_xy, dst_shape_rc)

                holes.append(type(ring)(holes_xy))

            return type(geom)(shell, holes)

    elif geom.geom_type.startswith("Multi") or geom.geom_type == "GeometryCollection":
        return type(geom)([_warp_shapely(part, warp_fxn, warp_kwargs) for part in geom.geoms])
    else:
        raise shapely.errors.GeometryTypeError(f"Type {geom.geom_type!r} not recognized")

def warp_shapely_geom_from_to(geom, from_M=None, from_transformation_src_shape_rc=None,
                   from_transformation_dst_shape_rc=None, from_src_shape_rc=None,
                   from_dst_shape_rc=None,from_bk_dxdy=None, from_fwd_dxdy=None,
                   to_M=None, to_transformation_src_shape_rc=None,
                   to_transformation_dst_shape_rc=None, to_src_shape_rc=None,
                   to_dst_shape_rc=None, to_bk_dxdy=None, to_fwd_dxdy=None):
    """
    Warp xy points using M and/or bk_dxdy/fwd_dxdy. If bk_dxdy is provided, it will be inverted to create fwd_dxdy

    Parameters
    ----------
    geom : shapely.geometery
        Shapely geom to warp

    M : ndarray, optional
         3x3 affine transformation matrix to perform rigid warp

    transformation_src_shape_rc : (int, int)
        Shape of image that was used to find the transformation.
        For example, this could be the original image in which features were detected

    transformation_dst_shape_rc : (int, int), optional
        Shape of the image with shape `transformation_src_shape_rc` after warping.
        This could be the shape of the original image after applying `M`.

    src_shape_rc : optional, (int, int)
        Shape of the image from which the points originated. For example,
        this could be a larger/smaller version of the image that was
        used for feature detection.

    dst_shape_rc : optional, (int, int)
        Shape of image (with shape `src_shape_rc`) after warping

    bk_dxdy : ndarray, pyvips.Image
        (2, N, M) numpy array of pixel displacements in the x and y
        directions from the reference image. dx = bk_dxdy[0],
        and dy=bk_dxdy[1]. If `bk_dxdy` is not None, but
        `fwd_dxdy` is None, then `bk_dxdy` will be inverted to warp `xy`.

    fwd_dxdy : ndarray, pyvips.Image
        Inverse of bk_dxdy. dx = fwd_dxdy[0], and dy=fwd_dxdy[1].
        This is what is actually used to warp the points.

    pt_buffer : int
        If `bk_dxdy` or `fwd_dxdy` are pyvips.Image object, then
        pt_buffer` determines the size of the window around the point used to
        get the local displacements.

    Returns
    -------
    warped_geom : shapely.geom
       Warped `geom`

    """

    warp_kwargs = {"from_M": from_M,
                   "from_transformation_src_shape_rc": from_transformation_src_shape_rc,
                   "from_transformation_dst_shape_rc": from_transformation_dst_shape_rc,
                   "from_src_shape_rc": from_src_shape_rc,
                   "from_dst_shape_rc":from_dst_shape_rc,
                   "from_bk_dxdy":from_bk_dxdy,
                   "from_fwd_dxdy":from_fwd_dxdy,
                   "to_M":to_M,
                   "to_transformation_src_shape_rc": to_transformation_src_shape_rc,
                   "to_transformation_dst_shape_rc": to_transformation_dst_shape_rc,
                   "to_src_shape_rc": to_src_shape_rc,
                   "to_dst_shape_rc": to_dst_shape_rc, "to_bk_dxdy": to_bk_dxdy,
                   "to_fwd_dxdy":to_fwd_dxdy}

    warped_geom = _warp_shapely(geom, warp_tools.warp_xy_from_to, warp_kwargs)

    return warped_geom

def warp_geojson_from_to(geojson_f, annotation_slide, to_slide_obj, src_slide_level=0, src_pt_level=0,
                        dst_slide_level=0, non_rigid=True):
    """Warp geoms in geojson file from annotation slide to another unwarped slide

    Takes a set of geometries found in this annotation slide, and warps them to
    their position in the unwarped "to" slide.

    Parameters
    ----------
    geojson_f : str
        Path to geojson file containing the annotation geometries

    to_slide_obj : Slide
        Slide to which the points will be warped. I.e. `xy`
        will be warped from this Slide to their position in
        the unwarped slide associated with `to_slide_obj`.

    src_pt_level: int, tuple, optional
        Pyramid level of the slide/image in which `xy` originated.
        For example, if `xy` are from the centroids of cell segmentation
        performed on the unwarped full resolution image, this should be 0.
        Alternatively, the value can be a tuple of the image's shape (row, col)
        from which the points came. For example, if `xy` are bounding
        box coordinates from an analysis on a lower resolution image,
        then pt_level is that lower resolution image's shape (row, col).

    dst_slide_level: int, tuple, optional
        Pyramid level of the slide/image into which `xy` will be warped.
        Similar to `src_pt_level`, if `dst_slide_level` is an int then
        the points will be warped to that pyramid level. If `dst_slide_level`
        is the "to" image's shape (row, col), then the points will be warped
        to their location in an image with that same shape.

    non_rigid : bool, optional
        Whether or not to conduct non-rigid warping. If False,
        then only a rigid transformation will be applied.

    """

    if np.issubdtype(type(src_pt_level), np.integer):
        src_pt_dim_rc = annotation_slide.slide_dimensions_wh[src_pt_level][::-1]
    else:
        src_pt_dim_rc = np.array(src_pt_level)

    if np.issubdtype(type(dst_slide_level), np.integer):
        to_slide_src_shape_rc = to_slide_obj.slide_dimensions_wh[dst_slide_level][::-1]
    else:
        to_slide_src_shape_rc = np.array(dst_slide_level)

    if src_slide_level != 0:
        if np.issubdtype(type(src_slide_level), np.integer):
            aligned_slide_shape = annotation_slide.val_obj.get_aligned_slide_shape(src_slide_level)
        else:
            aligned_slide_shape = np.array(src_slide_level)
    else:
        aligned_slide_shape = annotation_slide.aligned_slide_shape_rc

    if non_rigid:
        src_fwd_dxdy = annotation_slide.fwd_dxdy
        dst_bk_dxdy = to_slide_obj.bk_dxdy

    else:
        src_fwd_dxdy = None
        dst_bk_dxdy = None

    with open(geojson_f) as f:
        annotation_geojson = json.load(f)

    warped_features = [None]*len(annotation_geojson["features"])
    for i, ft in tqdm.tqdm(enumerate(annotation_geojson["features"])):
        geom = shapely.geometry.shape(ft["geometry"])
        warped_geom = warp_shapely_geom_from_to(geom=geom,
                                        from_M=annotation_slide.M,
                                        from_transformation_dst_shape_rc=annotation_slide.reg_img_shape_rc,
                                        from_transformation_src_shape_rc=annotation_slide.processed_img_shape_rc,
                                        from_dst_shape_rc=aligned_slide_shape,
                                        from_src_shape_rc=src_pt_dim_rc,
                                        from_fwd_dxdy=src_fwd_dxdy,
                                        to_M=to_slide_obj.M,
                                        to_transformation_src_shape_rc=to_slide_obj.processed_img_shape_rc,
                                        to_transformation_dst_shape_rc=to_slide_obj.reg_img_shape_rc,
                                        to_src_shape_rc=to_slide_src_shape_rc,
                                        to_dst_shape_rc=aligned_slide_shape,
                                        to_bk_dxdy=dst_bk_dxdy
                                        )

        warped_ft = deepcopy(ft)
        warped_ft["geometry"] = shapely.geometry.mapping(warped_geom)
        warped_features[i] = warped_ft

    warped_geojson = {"type":annotation_geojson["type"], "features":warped_features}

    return warped_geojson

if __name__ == "__main__":

    import sys

    print(sys.argv, file=sys.stderr)
    if (len(sys.argv) < 3):
        print("""ERROR: 2 arguments needed.
Usage: VALIS_modified.py <working directory> <file prefix>
Exiting.""", file=sys.stderr)
        sys.exit(1)

    ## FIRST ARG slide, results, annotation dir
    slide_src_dir = sys.argv[1]
    results_dst_dir = sys.argv[1]
    annotation_dir = sys.argv[1]

    ## SECOND ARG file prefix
    prefix = sys.argv[2]

    annotation_geojson_f = prefix + "_40X.geojson"
    annotation_img_f = prefix + "_TMEM119_40X.svs"
    target_img_f_Palmgren = prefix + "_Palmgren_40X.svs"
    target_img_f_PLP = prefix + "_PLP_40X.svs"
    target_img_f_PGM1 = prefix + "_PGM1_40X.svs"
    target_img_f_IBA1 = prefix + "_IBA1_40X.svs"
    target_img_f_P2Y12 = prefix + "_P2Y12_40X.svs"

    warped_geojson_annotation_f_Palmgren = f"{valtils.get_name(target_img_f_Palmgren)}_annotations.geojson"
    warped_geojson_annotation_f_PLP = f"{valtils.get_name(target_img_f_PLP)}_annotations.geojson"
    warped_geojson_annotation_f_PGM1 = f"{valtils.get_name(target_img_f_PGM1)}_annotations.geojson"
    warped_geojson_annotation_f_IBA1 = f"{valtils.get_name(target_img_f_IBA1)}_annotations.geojson"
    warped_geojson_annotation_f_P2Y12 = f"{valtils.get_name(target_img_f_P2Y12)}_annotations.geojson"

    # Perform registration
    registrar = registration.Valis(slide_src_dir, results_dst_dir, reference_img_f=annotation_img_f, align_to_reference=True)
    rigid_registrar, non_rigid_registrar, error_df = registrar.register(brightfield_processing_cls=PreProcessor)

    # Transfer annotation
    annotation_geojson_f = os.path.join(annotation_dir, annotation_geojson_f)
    annotation_source_slide = registrar.get_slide(annotation_img_f)
    target_slide_Palmgren = registrar.get_slide(target_img_f_Palmgren)
    target_slide_PLP = registrar.get_slide(target_img_f_PLP)
    target_slide_PGM1 = registrar.get_slide(target_img_f_PGM1)
    target_slide_IBA1 = registrar.get_slide(target_img_f_IBA1)
    target_slide_P2Y12 = registrar.get_slide(target_img_f_P2Y12)

    warped_geojson_Palmgren = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_Palmgren)
    warped_geojson_PLP = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_PLP)
    warped_geojson_PGM1 = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_PGM1)
    warped_geojson_IBA1 = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_IBA1)
    warped_geojson_P2Y12 = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_P2Y12)

    # Save annotation, which can be dragged and dropped into QuPath
    with open(warped_geojson_annotation_f_Palmgren, 'w') as f:
        json.dump(warped_geojson_Palmgren, f)

    with open(warped_geojson_annotation_f_PLP, 'w') as f:
        json.dump(warped_geojson_PLP, f)

    with open(warped_geojson_annotation_f_PGM1, 'w') as f:
        json.dump(warped_geojson_PGM1, f)

    with open(warped_geojson_annotation_f_IBA1, 'w') as f:
        json.dump(warped_geojson_IBA1, f)

    with open(warped_geojson_annotation_f_P2Y12, 'w') as f:
        json.dump(warped_geojson_P2Y12, f)

However, in the new version, given the code you wrote above, I thought I could remove all the upstream code that defines the preprocessing and warping. So I ran:

import numpy as np
import json
import os
import pathlib
import shapely
import tqdm
from copy import deepcopy

from skimage import exposure
from valis import preprocessing, registration, valtils, warp_tools

if __name__ == "__main__":

    import sys

    print(sys.argv, file=sys.stderr)
    if (len(sys.argv) < 3):
        print("""ERROR: 2 arguments needed.
Usage: VALIS_modified.py <working directory> <file prefix>
Exiting.""", file=sys.stderr)
        sys.exit(1)

    ## FIRST ARG slide, results, annotation dir
    slide_src_dir = sys.argv[1]
    results_dst_dir = sys.argv[1]
    annotation_dir = sys.argv[1]

    ## SECOND ARG file prefix
    prefix = sys.argv[2]

    annotation_geojson_f = prefix + "_40X.geojson"
    annotation_img_f = prefix + "_PLP_40X.svs"
    target_img_f_Palmgren = prefix + "_Palmgren_40X.svs"
    target_img_f_TMEM119 = prefix + "_TMEM119_40X.svs"
    target_img_f_PGM1 = prefix + "_PGM1_40X.svs"
    target_img_f_IBA1 = prefix + "_IBA1_40X.svs"
    target_img_f_P2Y12 = prefix + "_P2Y12_40X.svs"

    warped_geojson_annotation_f_Palmgren = f"{valtils.get_name(target_img_f_Palmgren)}_annotations.geojson"
    warped_geojson_annotation_f_TMEM119 = f"{valtils.get_name(target_img_f_TMEM119)}_annotations.geojson"
    warped_geojson_annotation_f_PGM1 = f"{valtils.get_name(target_img_f_PGM1)}_annotations.geojson"
    warped_geojson_annotation_f_IBA1 = f"{valtils.get_name(target_img_f_IBA1)}_annotations.geojson"
    warped_geojson_annotation_f_P2Y12 = f"{valtils.get_name(target_img_f_P2Y12)}_annotations.geojson"

    # Perform registration
    registrar = registration.Valis(slide_src_dir, results_dst_dir, reference_img_f=annotation_img_f, align_to_reference=True)
    rigid_registrar, non_rigid_registrar, error_df = registrar.register(brightfield_processing_cls=PreProcessor)

    # Transfer annotation
    annotation_geojson_f = os.path.join(annotation_dir, annotation_geojson_f)
    annotation_source_slide = registrar.get_slide(annotation_img_f)
    target_slide_Palmgren = registrar.get_slide(target_img_f_Palmgren)
    target_slide_TMEM119 = registrar.get_slide(target_img_f_TMEM119)
    target_slide_PGM1 = registrar.get_slide(target_img_f_PGM1)
    target_slide_IBA1 = registrar.get_slide(target_img_f_IBA1)
    target_slide_P2Y12 = registrar.get_slide(target_img_f_P2Y12)

    warped_geojson_Palmgren = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_Palmgren)
    warped_geojson_TMEM119 = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_TMEM119)
    warped_geojson_PGM1 = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_PGM1)
    warped_geojson_IBA1 = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_IBA1)
    warped_geojson_P2Y12 = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_P2Y12)

    # Save annotation, which can be dragged and dropped into QuPath
    with open(warped_geojson_annotation_f_Palmgren, 'w') as f:
        json.dump(warped_geojson_Palmgren, f)

    with open(warped_geojson_annotation_f_TMEM119, 'w') as f:
        json.dump(warped_geojson_TMEM119, f)

    with open(warped_geojson_annotation_f_PGM1, 'w') as f:
        json.dump(warped_geojson_PGM1, f)

    with open(warped_geojson_annotation_f_IBA1, 'w') as f:
        json.dump(warped_geojson_IBA1, f)

    with open(warped_geojson_annotation_f_P2Y12, 'w') as f:
        json.dump(warped_geojson_P2Y12, f)

If I do that, I get the following messages/errors:

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

NameError: name 'PreProcessor' is not defined

Any insight into how I can adjust my code to make it compatible with the new version?

abadgerw commented 1 year ago

Update @cdgatenbee: If I add back in the code for preprocessing, I don't get the PreProcessor error.

I do still get the warning:

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

I then get the following:

warped_geojson_Palmgren = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_Palmgren)
NameError: name 'warp_geojson_from_to' is not defined

I assume this is because the internal code in the new version is slightly different from the old function? I wasn't sure how to make the correct adjustments to what I was doing, since I wasn't originally using the for loop and slide_obj calls in your most recent example.

abadgerw commented 1 year ago

Last update @cdgatenbee (sorry!).

This is what I ultimately came up with that ran, but I wanted to get your take on whether it seems appropriate:


import numpy as np
import json
import os
import pathlib
import shapely
import tqdm
from copy import deepcopy

from skimage import exposure
from valis import preprocessing

class PreProcessor(preprocessing.ImageProcesser):
    """Preprocess images for registration

    Uses distance to background color and also subtracts the background intensity

    """

    def __init__(self, image, src_f, level, series, *args, **kwargs):
        super().__init__(image=image, src_f=src_f, level=level,
                         series=series, *args, **kwargs)

    def create_mask(self):
        _, tissue_mask = preprocessing.create_tissue_mask_from_rgb(self.image)

        return tissue_mask

    def process_image(self, brightness_q=0.99, *args, **kwargs):

        processed_img, cam = preprocessing.calc_background_color_dist(self.image, brightness_q=brightness_q)
        processed_img = exposure.rescale_intensity(processed_img, in_range="image", out_range=(0, 1))

        # Subtract background
        brightest_thresh = np.quantile(cam[..., 0], 0.9)
        brightest_idx = np.where(cam[..., 0] >= brightest_thresh)
        brightest_pixels = processed_img[brightest_idx]
        brightest_rgb = np.median(brightest_pixels, axis=0)
        no_bg = processed_img - brightest_rgb
        no_bg = np.clip(no_bg, 0, 255)

        # Adjust range and perform adaptive histogram equalization
        no_bg = (255*exposure.equalize_adapthist(no_bg/no_bg.max())).astype(np.uint8)

        return no_bg

from valis import registration, valtils, warp_tools

if __name__ == "__main__":

    import sys

    print(sys.argv, file=sys.stderr)
    if (len(sys.argv) < 3):
        print("""ERROR: 2 arguments needed.
Usage: VALIS_modified.py <working directory> <file prefix>
Exiting.""", file=sys.stderr)
        sys.exit(1)

    ## FIRST ARG slide, results, annotation dir
    slide_src_dir = sys.argv[1]
    results_dst_dir = sys.argv[1]
    annotation_dir = sys.argv[1]

    ## SECOND ARG file prefix
    prefix = sys.argv[2]

    annotation_geojson_f = prefix + "_40X.geojson"
    annotation_img_f = prefix + "_TMEM119_40X.svs"
    target_img_f_Palmgren = prefix + "_Palmgren_40X.svs"
    target_img_f_PLP = prefix + "_PLP_40X.svs"
    target_img_f_PGM1 = prefix + "_PGM1_40X.svs"
    target_img_f_IBA1 = prefix + "_IBA1_40X.svs"
    target_img_f_P2Y12 = prefix + "_P2Y12_40X.svs"

    warped_geojson_annotation_f_Palmgren = f"{valtils.get_name(target_img_f_Palmgren)}_annotations.geojson"
    warped_geojson_annotation_f_PLP = f"{valtils.get_name(target_img_f_PLP)}_annotations.geojson"
    warped_geojson_annotation_f_PGM1 = f"{valtils.get_name(target_img_f_PGM1)}_annotations.geojson"
    warped_geojson_annotation_f_IBA1 = f"{valtils.get_name(target_img_f_IBA1)}_annotations.geojson"
    warped_geojson_annotation_f_P2Y12 = f"{valtils.get_name(target_img_f_P2Y12)}_annotations.geojson"

    # Perform registration
    registrar = registration.Valis(slide_src_dir, results_dst_dir, reference_img_f=annotation_img_f, align_to_reference=True)
    rigid_registrar, non_rigid_registrar, error_df = registrar.register(brightfield_processing_cls=PreProcessor)

    # Transfer annotation
    annotation_geojson_f = os.path.join(annotation_dir, annotation_geojson_f)
    annotation_source_slide = registrar.get_slide(annotation_img_f)
    target_slide_Palmgren = registrar.get_slide(target_img_f_Palmgren)
    target_slide_PLP = registrar.get_slide(target_img_f_PLP)
    target_slide_PGM1 = registrar.get_slide(target_img_f_PGM1)
    target_slide_IBA1 = registrar.get_slide(target_img_f_IBA1)
    target_slide_P2Y12 = registrar.get_slide(target_img_f_P2Y12)

    warped_geojson_Palmgren = annotation_source_slide.warp_geojson_from_to(annotation_geojson_f, target_slide_Palmgren)
    warped_geojson_PLP = annotation_source_slide.warp_geojson_from_to(annotation_geojson_f, target_slide_PLP)
    warped_geojson_PGM1 = annotation_source_slide.warp_geojson_from_to(annotation_geojson_f, target_slide_PGM1)
    warped_geojson_IBA1 = annotation_source_slide.warp_geojson_from_to(annotation_geojson_f, target_slide_IBA1)
    warped_geojson_P2Y12 = annotation_source_slide.warp_geojson_from_to(annotation_geojson_f, target_slide_P2Y12)

    # Save annotation, which can be dragged and dropped into QuPath
    with open(warped_geojson_annotation_f_Palmgren, 'w') as f:
        json.dump(warped_geojson_Palmgren, f)

    with open(warped_geojson_annotation_f_PLP, 'w') as f:
        json.dump(warped_geojson_PLP, f)

    with open(warped_geojson_annotation_f_PGM1, 'w') as f:
        json.dump(warped_geojson_PGM1, f)

    with open(warped_geojson_annotation_f_IBA1, 'w') as f:
        json.dump(warped_geojson_IBA1, f)

    with open(warped_geojson_annotation_f_P2Y12, 'w') as f:
        json.dump(warped_geojson_P2Y12, f)

I do still get the warning though:

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

cdgatenbee commented 1 year ago

Hi @abadgerw, Sorry, I had forgotten to mention that the way annotation warping is done after the update is a bit different from the first example I put together, in that the annotation Slide is what warps the points, not a separate function. So the last example looks great :) I'm not sure about the SLF4J error though. Does it still run OK?
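
In other words, the change is essentially this (a sketch using the names from your script):

# Old approach: a standalone helper function defined in the script did the warping
warped_geojson = warp_geojson_from_to(annotation_geojson_f, annotation_source_slide, target_slide_Palmgren)

# New approach: the annotation Slide warps its own geometries
warped_geojson = annotation_source_slide.warp_geojson_from_to(annotation_geojson_f, target_slide_Palmgren)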

Best, -Chandler

abadgerw commented 1 year ago

Thanks, @cdgatenbee! After a quick check, all seems to run even with the SLF4J warning. The only issue is the same slide (example 1 above). I now get the following error:

Traceback (most recent call last):
  File "/hpcdata/li_mdis/li_mdis_data/waldmanad/VALIS/Scripts/Priority/TMEM119_Missing_CD68/VALIS.py", line 94, in <module>
    warped_geojson_Palmgren = annotation_source_slide.warp_geojson_from_to(annotation_geojson_f, target_slide_Palmgren)
  File "/sysapps/cluster/software/Anaconda3/2022.10/envs/valis-1.0.0rc13/lib/python3.10/site-packages/valis/registration.py", line 1338, in warp_geojson_from_to
    warped_geom = warp_tools.warp_shapely_geom_from_to(geom=geom,
  File "/sysapps/cluster/software/Anaconda3/2022.10/envs/valis-1.0.0rc13/lib/python3.10/site-packages/valis/warp_tools.py", line 2551, in warp_shapely_geom_from_to
    warped_geom = _warp_shapely(geom, warp_xy_from_to, warp_kwargs)
  File "/sysapps/cluster/software/Anaconda3/2022.10/envs/valis-1.0.0rc13/lib/python3.10/site-packages/valis/warp_tools.py", line 2382, in _warp_shapely
    shell_xy = warp_fxn(np.vstack(geom.exterior.coords), **warp_kwargs)
  File "<__array_function__ internals>", line 180, in vstack
  File "/sysapps/cluster/software/Anaconda3/2022.10/envs/valis-1.0.0rc13/lib/python3.10/site-packages/numpy/core/shape_base.py", line 282, in vstack
    return _nx.concatenate(arrs, 0)
  File "<__array_function__ internals>", line 180, in concatenate
ValueError: need at least one array to concatenate

I'm confused because when I open the geojson file I'm trying to warp in QuPath (in the google drive), it has annotations (image attached).

[Image: Screen Shot 2023-03-06 at 4 18 44 PM]

Any insight into why they may not be registering when trying to warp?

cdgatenbee commented 1 year ago

Hi @abadgerw, Yes, I ran into this issue too, which seems to occur because a few of the annotations are actually empty, and so don't have any coordinates to warp. I think it's probably the last 3 annotations, which appear to be empty rectangles.

I added a check to handle this, but I haven't pushed the fix yet because there are some other things I'd like to include in the update. In the meantime, will it work if you maybe re-export the annotations without the empty rectangles?
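
If re-exporting is inconvenient, a possible workaround in the meantime would be to drop the empty geometries in Python before warping; a minimal sketch (annotations_cleaned.geojson is just a hypothetical output name, and annotation_geojson_f is the original QuPath export):

import json

import shapely.geometry

with open(annotation_geojson_f) as f:
    annotation_geojson = json.load(f)

# Keep only the features whose geometries actually contain coordinates
annotation_geojson["features"] = [
    ft for ft in annotation_geojson["features"]
    if not shapely.geometry.shape(ft["geometry"]).is_empty
]

with open("annotations_cleaned.geojson", "w") as f:
    json.dump(annotation_geojson, f)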

Best, -Chandler

abadgerw commented 1 year ago

That fixed it! Thanks, @cdgatenbee!

I went back through and found one last example of the same error (number 2 above) that persists: https://drive.google.com/drive/folders/1PdQcZyTaSAb2E7HsNgdT2tYQAUqQuYCa?usp=sharing

I think this may be due to the fact that the image with the geojson is rotated ~180 degrees from the others (FYI, the geojson corresponds to the PLP image in this example)? I tried transferring the annotation to all the other images using the updated code, but I think the rotation is causing it to throw the findHomography error.

I also found an example of a section that ran without error, but on inspection had issues with some sections that were rotated: https://drive.google.com/drive/folders/1QaBYxYRR6G0pbBpaZABZOVLnZ7wUExKV?usp=sharing

Here is the original image (FYI the geojson corresponds to the TMEM119 image in this example):

[Image: Screen Shot 2023-03-06 at 11 35 19 PM]

Here is one section that transferred well that isn't rotated:

[Image: Screen Shot 2023-03-06 at 11 36 13 PM]

And here is one section that didn't transfer well that is rotated:

[Image: Screen Shot 2023-03-06 at 11 36 30 PM]

Lastly, I went back to the example from above in which you got the following results:

[Image: Res]

I noticed mine are slightly different (namely the outer edge in this image):

[Image: Screen Shot 2023-03-07 at 1 03 10 AM]

I thought it could be because I am setting align_to_reference=True, as I didn't see that in your call above, but when I changed that in my script I didn't see a difference. I'm not sure why my output would look different if I am using the new version and my script should be similar?

cdgatenbee commented 1 year ago

Hi @abadgerw, I think you're right that the difference might be due to my example not using align_to_reference=True. I also wasn't getting the Homography error when I left that option as False. It can sometimes work out better with align_to_reference=False, because valis will align each image to its most similar image, which increases the chances the registration will be successful (i.e. no homography errors). Forcing an alignment directly to the reference can improve the alignment of images to the reference image (A), but sometimes (like in these examples) an image (B) won't easily align to the reference, and then we get the homography errors. But that same image, B, could align to another image, C, and C may align to the reference, so B will still end up aligned to the reference (because it's aligned to C, which is aligned to A). That explanation is a little messy, but hopefully it makes sense :) As an alternative, you could set align_to_reference=False, but then do a "micro-registration" step to align each image to the reference (see below). EDIT: I think I found a bug that may cause the micro-registration to crash if max_non_rigid_registration_dim_px is large, but it's an easy fix that will be in the next update.

Regarding the image that didn't transfer well, I was able to reproduce the issue, although for some reason my troublesome image was PLP. In my case, the annotations were way off (it looked like they shrank), but that was because the rigid registration wasn't good, which could be seen in the overlap images. I was able to fix it by setting max_processed_image_dim_px=500 (the default is 850), which ended up working out, I think because it focused the registration on the overall tissue shape, as opposed to internal details. The results look like this:

[Image: anno_example4]

And the complete code is:

import pathlib
import os
import json
from valis import registration, slide_io, valtils, warp_tools, preprocessing

from skimage import exposure
import numpy as np

class PreProcessor(preprocessing.ImageProcesser):
    """Preprocess images for registration

    Uses distance to background color and also subtracts the background intensity

    """

    def __init__(self, image, src_f, level, series, *args, **kwargs):
        super().__init__(image=image, src_f=src_f, level=level,
                         series=series, *args, **kwargs)

    def create_mask(self):
        _, tissue_mask = preprocessing.create_tissue_mask_from_rgb(self.image)

        return tissue_mask

    def process_image(self, brightness_q=0.99, *args, **kwargs):

        processed_img, cam = preprocessing.calc_background_color_dist(self.image, brightness_q=brightness_q)
        processed_img = exposure.rescale_intensity(processed_img, in_range="image", out_range=(0, 1))

        # Subtract background
        brightest_thresh = np.quantile(cam[..., 0], 0.9)
        brightest_idx = np.where(cam[..., 0] >= brightest_thresh)
        brightest_pixels = processed_img[brightest_idx]
        brightest_rgb = np.median(brightest_pixels, axis=0)
        no_bg = processed_img - brightest_rgb
        no_bg = np.clip(no_bg, 0, 255)

        # Adjust range and perform adaptive histogram equalization
        no_bg = (255*exposure.equalize_adapthist(no_bg/no_bg.max())).astype(np.uint8)

        return no_bg

# Assumes src_dir and res_dir are the slide source and results directories, and
# annotation_f is the path to the GeoJSON annotation file exported from QuPath
annotation_slide_name = "MS317_SC_C_TMEM119_40X"
registrar = registration.Valis(src_dir, res_dir, max_processed_image_dim_px=500)
rigid_registrar, non_rigid_registrar, error_df = registrar.register(brightfield_processing_cls=PreProcessor)

# Uncomment to do a micro-registration step that refines the non-rigid alignment to the reference image. May or may not make an improvement
# registrar.register_micro(align_to_reference=True, brightfield_processing_cls=PreProcessor)

warped_anno_dir = os.path.join(res_dir,registrar.name, "warped_annotations")
pathlib.Path(warped_anno_dir).mkdir(exist_ok=True, parents=True)

annotation_slide = registrar.get_slide(annotation_slide_name)
for slide_name, slide_obj in registrar.slide_dict.items():
    if slide_name == annotation_slide_name:
        continue

    transferred_anno = annotation_slide.warp_geojson_from_to(annotation_f, slide_obj)
    warped_geojson_annotation_f = os.path.join(warped_anno_dir, f"{slide_name}_annotation.json")
    with open(warped_geojson_annotation_f, 'w') as f:
        json.dump(transferred_anno, f)

registration.kill_jvm()

Maybe try that script and let me know if it works for you too.

Best, -Chandler

zindy commented 1 year ago

Hi Chandler,

I've adapted your last code for a project I'm currently working on and it's working brilliantly.

I wrote a QuPath script some time ago for exporting all my QuPath project annotations as geojson files (extension .ndpi.geojson modelled on .ndpi.ndpa) and was then able to warp the annotations made on my AE1 images onto CD45 and SMA matched sections.

It's breast tissue as well, so there is quite a bit of deformation between the sections, but the "VALIS warped" annotations are spot on! For this project, doing it that way also makes more sense than warping the actual images.

Thanks mate :-)

cdgatenbee commented 1 year ago

Hi @zindy, so happy to see valis is working out for you! And thanks to @abadgerw for making this feature request and sharing the data, it's turning out to be pretty handy :)

-Chandler

abadgerw commented 1 year ago

@cdgatenbee, all has been working great with this (thanks again!). I just had one question: we will be generating new images on a different slide scanner, as our current scanner is quite old. The only issue is that the new images will have a different magnification and resolution (20x at 0.22 µm/pixel, while the images with the annotations we'd like to transfer were drawn at 40x with 0.252 µm/pixel). Is there a way to account for this while doing the annotation transfer?

As always, thanks for your help!

cdgatenbee commented 1 year ago

Hi @abadgerw, Really happy to hear things are working out! I think this should work out OK without having to do anything extra, since valis will use the new WSIs' sizes to scale the annotation points to the presumably smaller images. Also, since valis scales the images to have similar sizes during registration, mixing 20x and 40x images should also work OK, at least for transferring annotations. I think the main issue would be if you wanted to warp the WSIs themselves, as the 20x ones would have to get upscaled to 40x, or the 40x ones downscaled to 20x. If that needed to be done, you can ensure the 40x images get downscaled to 20x by setting the reference image to be a 20x image, or have the 20x images upscaled to 40x by having the reference image be a 40x image. Having said all of that, I haven't actually tried this before, so if it doesn't work out as expected please let me know and I'll see if I can figure out a way to get it to work.
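
For example, if you did end up wanting to warp and save the slides themselves, a minimal sketch (the reference filename is hypothetical; it would just be one of the new 20x slides):

# Make a 20x slide the reference so that, when warping the WSIs, the 40x slides
# are downscaled to 20x rather than the 20x slides being upscaled to 40x
registrar = registration.Valis(slide_src_dir, results_dst_dir,
                               reference_img_f="case01_CD68_20X.svs")
rigid_registrar, non_rigid_registrar, error_df = registrar.register(brightfield_processing_cls=PreProcessor)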

Best, -Chandler

abadgerw commented 1 year ago

Thanks, @cdgatenbee! I'll test it out when I get some of the new images and report back.