bkntr / napari-brainways

Brainways UI plugin for napari
GNU General Public License v3.0

Exporting brain regions mask #8

paulacauhy opened this issue 9 months ago

paulacauhy commented 9 months ago

Hi,

I'm currently using Brainways to align my mouse sections to the reference atlas, and I'd like to export the transformed image together with the atlas annotations of the brain regions so I can use both in other downstream applications. I can't find a way in the GUI to export anything without going through the cell detection analysis. Is there any way to export the transformed image with the atlas region annotations without going through the next steps in the GUI?

Also, when importing cell detections, will their coordinates be corrected for the new transformed tissue image?

Thank you!

bkntr commented 9 months ago

Currently there is no way to export the transformed image with the atlas region annotations directly from the GUI, but it can be done with a Python script. Here is a quick, untested sketch that should point you in the right direction:

from brainways.project.brainways_project import BrainwaysProject
import numpy as np

project = BrainwaysProject.open("your_project.bwp")
for subject in project.subjects:
    for _, document in subject.valid_documents:
        image = subject.read_lowres_image(document)  # low-resolution image of this section
        transformed_image = project.pipeline.transform_image(image, document.params)  # warp into atlas space
        annotation = np.array(project.pipeline.get_atlas_slice(document.params).annotation)  # atlas region annotations for this section

Let me know if this helps. I'll look into adding an option to export these directly from the GUI, thank you for the suggestion!

Regarding importing cell detections, yes, their coordinates will be corrected for the transformed tissue.

paulacauhy commented 9 months ago

Hi, thanks for replying! I tested the script above but got an error from project.pipeline.transform_image(). It seems that this method is not implemented in the BrainwaysTransform base class. Are there any alternatives, or any idea when this will be implemented?

Getting the 'annotation' worked fine, but what is the best way to convert the AtlasSlice to a numpy array? I'm asking so I can use it in other applications.

Thanks in advance for your help!

bkntr commented 9 months ago

To convert the AtlasSlice to a numpy array, use np.array(atlas_slice.annotation) (I edited my previous comment to reflect that). project.pipeline.transform_image() is a method of BrainwaysPipeline, not of BrainwaysTransform. What error are you getting?
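
For the AtlasSlice part, this is just the same two calls already used in the loop above, written out separately (the annotation values are the atlas region IDs, i.e. the brain regions mask):

atlas_slice = project.pipeline.get_atlas_slice(document.params)
annotation = np.array(atlas_slice.annotation)  # 2D array of region IDs for this section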

paulacauhy commented 9 months ago

Thanks again! I pasted the error message below.

---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
Cell In[3], line 4
      2 for _, document in subject.valid_documents:
      3     image = subject.read_lowres_image(document)
----> 4     transformed_image = project.pipeline.transform_image(image, document.params)
      5     annotation = np.array(project.pipeline.get_atlas_slice(document.params).annotation)

File ~\anaconda3\envs\brainways\lib\site-packages\brainways\pipeline\brainways_pipeline.py:97, in BrainwaysPipeline.transform_image(self, image, params, until_step, scale)
     85 transform = self.get_image_to_atlas_transform(
     86     brainways_params=params,
     87     lowres_image_size=image.shape,
     88     until_step=until_step,
     89     scale=scale,
     90 )
     92 output_size = (
     93     int(self.atlas.shape[1] * scale),
     94     int(self.atlas.shape[2] * scale),
     95 )
---> 97 transformed_image = transform.transform_image(
     98     image=image, output_size=output_size
     99 )
    101 if image.dtype == np.uint8:
    102     transformed_image = convert_to_uint8(transformed_image)

File ~\anaconda3\envs\brainways\lib\site-packages\brainways\transforms\image_to_atlas_transform.py:46, in ImageToAtlasTransform.transform_image(self, image, output_size, mode)
     39 def transform_image(
     40     self,
     41     image: np.ndarray,
     42     output_size: ImageSizeHW | None = None,
     43     mode: str = "bilinear",
     44 ) -> np.ndarray:
     45     for transform in self.transforms:
---> 46         image = transform.transform_image(image, output_size=output_size, mode=mode)
     48     return image

File ~\anaconda3\envs\brainways\lib\site-packages\brainways\transforms\base.py:15, in BrainwaysTransform.transform_image(self, image, output_size, mode)
      9 def transform_image(
     10     self,
     11     image: np.ndarray,
     12     output_size: ImageSizeHW | None = None,
     13     mode: str = "bilinear",
     14 ) -> np.ndarray:
---> 15     raise NotImplementedError()

NotImplementedError: 

bkntr commented 8 months ago

Sorry for the late reply! Here is code that I checked and that works (I forgot to pass the until_step parameter in my previous snippet):

from brainways.project.brainways_project import BrainwaysProject
from brainways.pipeline.brainways_pipeline import PipelineStep
import numpy as np
import matplotlib.pyplot as plt

project = BrainwaysProject.open("your_project.bwp")
for subject in project.subjects:
    for _, document in subject.valid_documents:
        # low-resolution image of this section
        image = subject.read_lowres_image(document)
        # warp the image into atlas space (until_step was the missing parameter)
        transformed_image = project.pipeline.transform_image(
            image, document.params, until_step=PipelineStep.TPS
        )
        # atlas region annotations (region IDs) matching the transformed image
        annotation = np.array(
            project.pipeline.get_atlas_slice(document.params).annotation
        )

        # display transformed image and annotation side by side using matplotlib
        fig, axes = plt.subplots(1, 2)
        axes[0].imshow(transformed_image)
        axes[1].imshow(annotation)
        plt.show()
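
If you want to write the results to disk for other tools instead of (or in addition to) displaying them, here is a minimal sketch that continues from the snippet above (same project, np and PipelineStep). The output folder and file names are just placeholders, and tifffile is an extra package you would need to install; any other image writer works as well:

from pathlib import Path
import tifffile  # pip install tifffile

out_dir = Path("brainways_export")  # placeholder output folder
out_dir.mkdir(exist_ok=True)

for subject in project.subjects:
    for i, (_, document) in enumerate(subject.valid_documents):
        image = subject.read_lowres_image(document)
        transformed_image = project.pipeline.transform_image(
            image, document.params, until_step=PipelineStep.TPS
        )
        annotation = np.array(
            project.pipeline.get_atlas_slice(document.params).annotation
        )
        # write the atlas-space image and the region-ID mask as TIFFs
        tifffile.imwrite(out_dir / f"section_{i:03d}_image.tif", transformed_image)
        tifffile.imwrite(
            out_dir / f"section_{i:03d}_annotation.tif", annotation.astype(np.uint32)
        )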