postnubilaphoebus opened this issue 2 months ago
Hello,
I can't try your example now.
You can convert data between Smil and Numpy; see https://smil.cmm.minesparis.psl.eu/doc/p660.html for the conversion functions.
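A minimal round-trip sketch with those conversions (the same calls as in your script below; check the axis order for your 3D data):

import numpy as np
import smilPython as sp

# NumPy -> Smil: the Image constructor accepts a NumPy array
vol = np.zeros((32, 64, 64), dtype=np.uint8)
vol[8:24, 16:48, 16:48] = 255
im = sp.Image(vol)

# Smil -> NumPy: getNumpyArray() gives the pixel data back as an array
back = im.getNumpyArray()
print(back.shape, back.dtype)   # compare with vol.shape to see whether axes are swapped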
Also, the watershed algorithms are surely not the same in Smil and Scikit Image. Try with a simpler image.
Regards
José-Marcio
On 8/27/24 20:17, Laurids Stockert wrote:
Hello again,
When using the Python wrapper for the watershed algorithm, I would like to obtain the labels as an npy file. Unfortunately, it is not intuitive to me how to transfer data between the Smil Image datatype and NumPy arrays, nor how to save the labels from the watershed algorithm after it finishes. Below, I have attached an MWE. You may load your own 3D image:
import numpy as np
import skimage.io
from skimage.morphology import local_minima
from scipy.ndimage import label
import smilPython as sp
import os

# Convert array to 8-bit unsigned integer format
def convert_to_numpy_uint8(data):
    """Normalize and convert data to 8-bit unsigned integer format."""
    data = data.astype(np.float64) / data.max()  # Normalize to [0, 1]
    data = 255 * data  # Scale to [0, 255]
    return data.astype(np.uint8)

# Convert array to 16-bit unsigned integer format
def convert_to_numpy_uint16(data):
    """Normalize and convert data to 16-bit unsigned integer format."""
    data = data.astype(np.float64) / data.max()  # Normalize to [0, 1]
    data = 65535 * data  # Scale to [0, 65535]
    return data.astype(np.uint16)

def process_image(prediction, output_directory, output_filename):
    """Process the image and save the result with watershed segmentation."""
    # Normalize and invert the prediction
    inverted_inference = 1.0 - (prediction - prediction.min()) / (prediction.max() - prediction.min())
    # Detect local minima in the inverted inference
    local_minima_mask = local_minima(inverted_inference)
    foreground_mask = inverted_inference > 0.4  # Assuming a threshold value of 0.4 for the foreground
    adjusted_minima = local_minima_mask * foreground_mask
    # Label the local minima
    labeled_minima, num_features = label(adjusted_minima)
    # Convert arrays to the appropriate format for SMIL processing
    inverted_inference_uint8 = convert_to_numpy_uint8(inverted_inference)
    labeled_minima_uint16 = convert_to_numpy_uint16(labeled_minima)
    # Create SMIL Image objects
    smil_inverted_inference = sp.Image(inverted_inference_uint8)
    smil_labeled_minima = sp.Image(labeled_minima_uint16)
    # Compute gradient
    smil_gradient = sp.Image(inverted_inference_uint8)
    sp.gradient(smil_inverted_inference, smil_gradient)
    # Create watershed segmentation
    smil_watershed = sp.Image(inverted_inference_uint8)
    smil_basins_out = sp.Image(inverted_inference_uint8)
    sp.watershed(smil_gradient, smil_labeled_minima, smil_watershed, smil_basins_out)
    # Retrieve the watershed result
    watershed_result = smil_basins_out.getNumpyArray()
    # Save the result to file
    if not os.path.exists(output_directory):
        os.makedirs(output_directory)
    output_filepath = os.path.join(output_directory, f"{output_filename}.npy")
    np.save(output_filepath, watershed_result)
    # Print results
    print("Maximum value in watershed result:", watershed_result.max())

if __name__ == "__main__":
    # Load the image
    image_path = "/path/to/your/image.tif"
    img = skimage.io.imread(image_path)
    # Define output directory and filename
    output_dir = "/path/to/output/directory"
    output_file = "segmentation_result"
    # Process the image
    process_image(img, output_dir, output_file)
When I do this for my image, I get a maximum label of 65,535. However, I find this very unlikely, as the number of labeled instances I get through scipy.ndimage.label is around 40,000 (the watershed algorithm by scikit image also only gives me around 40,000 instances). How should I correctly save this? Also, as you can see, choosing UINT16 as a datatype here can cause problems. If I had only a few thousand more instances in my image, the labels wouldn't work anymore.
Hello,
Thanks for your quick reply. I fixed the issue: my conversion to uint16 was incorrect. Simply doing labeled_minima.astype(np.uint16) works; the scaling I did introduces floating-point errors.
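A quick sketch of the difference (with made-up label counts close to mine):

import numpy as np

labels = np.arange(0, 40001)                    # values 0..40000: ~40,000 label IDs plus background 0
scaled = (65535 * (labels.astype(np.float64) / labels.max())).astype(np.uint16)
print(scaled.max())                             # 65535: rescaling stretches the IDs, which explains the max label I saw
print(labels.astype(np.uint16).max())           # 40000: a plain cast keeps the original IDs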
Best regards, Laurids
Hi,
Some hints...
It's possible to use UINT32 data types, but due to hardware or OS limitations, some functions won't work.
Why integers and not floats? Mathematical morphology theory is based on lattices, and integers are best suited to handle lattices. Another reason is that operations on integers are faster than on floats.
Thank you for the hints. I believe I create all images before calling a function in my code. I have type uint16 for the basins (imbasinsout) and the labels/seeding points (with indices between 0 (background) and max_label, the highest label). Everything else is uint8. In the screenshots, you can see the result of SMIL (first image) and the result of skimage (second image). Both use the same seeding points calculated with the local_minima function. The screenshots are views of xy, xz, and yz of the 3D volume; the colors indicate cell ids. What could I be missing here?
Screenshot.from.2024-08-28.10-43-44.png (SMIL result): https://github.com/user-attachments/assets/2ac90f6b-a3cd-445f-958d-28809e988834
Screenshot.from.2024-08-28.10-44-42.png (skimage result): https://github.com/user-attachments/assets/0a1ba8d6-55b2-4a0e-b4f1-f56756faff5e
Which image corresponds to what? And the three images: what are they?
All of the images are labeled cells coming from confocal microscopy, with the grey image in the background being the microscopy image and the colors representing the labels. There are six images if you go from top to bottom, divided into two screenshots. The first three (first screenshot) are three different views of the SMIL watershed result, visualized in different slices of the 3D volume: the first image is an xy slice, the second an xz slice, and the third a yz slice. The second screenshot visualizes the segmentation result using the watershed function provided by skimage; these are the last three images, again an xy, an xz, and a yz slice. Different cell ids are colored differently. It seems to me that there might be an issue in the order in which the result is read from SMIL back to numpy (e.g. Fortran or C order in memory), but converting to either did not help (e.g. np.asfortranarray(watershed_result) or np.ascontiguousarray(watershed_result)).
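For reference, np.asfortranarray and np.ascontiguousarray only change the memory layout, not the logical axis order, so a transpose would be needed if the axes really were swapped (a small sketch with made-up shapes):

import numpy as np

vol = np.arange(2 * 3 * 4).reshape(2, 3, 4)      # pretend this is (z, y, x)
f = np.asfortranarray(vol)                        # same shape and values, only the memory layout differs
print(np.array_equal(vol, f))                     # True: slices look identical
t = vol.transpose(2, 1, 0)                        # swapping axes is what actually changes the views
print(t.shape)                                    # (4, 3, 2)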
I'm sending you a script to do watershed on 3D images that I used some time ago.
I don't remember how many labels we had in this image, but it was quite a lot.
I'm using the script I sent you, slightly modified to print some debug information and to use the 'UINT16' data type.
The result I see is OK for me.
This is a 512x512x512 image.
OBS: the times are not very good, as I'm running this on a nearly 10-year-old computer.
(image) @.***:/export/data/CMM-sources/smil/users/MAT/Maxime$ ./water-it
SMIL (Simple Morphological Image Library) 1.1.1-dev
Copyright (c) 2011-2016, Matthieu FAESSEL and ARMINES
Copyright (c) 2017-2024, CMM - Centre de Morphologie Mathematique
All rights reserved.

3D image
Data type: UINT8
Size: 512x512x512
Allocated (128MiB)

Gradient    0.1 s
Minima     39.1 s
Label       5.8 s
Bassins     0.0 s
Watershed  16.5 s
Sup...      0.0 s
49856 regions   0.0 s (total)
Type any key to quit
Hi, it seems like the script was not attached to your message. Best, Laurids
Yes, it was, but it seems it was removed because .py files aren't supported. I renamed it to .txt.
I noticed that I applied an opening of size 5 to make the number of labels fall below 2^16 (64K). Without this, the labeling had 600K labels. The next releases will be compiled with UINT32 included by default.
I tried it your way, but I still have the same issues as before. The image shape is similar to what you used. I had a minor issue before where I forgot to threshold unimportant local minima to generate markers, so the number of markers is now correct. However, the segmentation result is still stripy in two views, as can be seen in the images above. Reading the array in a different order (C or Fortran) also did not solve the issue. The updated code I used is found here:
from skimage.morphology import local_minima
from scipy.ndimage import label
import numpy as np
import smilPython as sp

def watershed_smil(img):
    """Process the image and save the result with watershed segmentation."""
    se = sp.RhombicuboctahedronSE()
    se = sp.Cross3DSE()
    szSE = 1
    # Invert image, define masks and label minima
    inverted_img = 1.0 - ((img - img.min()) / (img.max() - img.min()))
    local_minima_mask = local_minima(inverted_img)
    foreground_mask = img > 0.35  # threshold unimportant minima
    adjusted_minima = local_minima_mask * foreground_mask
    background_mask = img > 0.15  # threshold watershed output
    labeled_minima, num_features = label(adjusted_minima)
    # Convert arrays to the appropriate format for SMIL processing
    inverted_img_uint8 = inverted_img.astype(np.uint8)
    labeled_minima_uint16 = labeled_minima.astype(np.uint16)
    smil_inverted_img = sp.Image("UINT8")
    smil_inverted_img.fromNumpyArray(inverted_img_uint8)
    smil_labeled_minima = sp.Image("UINT16")
    smil_labeled_minima.fromNumpyArray(labeled_minima_uint16)
    # morphological opening
    sp.open(smil_inverted_img, smil_inverted_img, sp.CubeSE(5))
    # Compute gradient
    smil_gradient = sp.Image("UINT8")
    sp.gradient(smil_inverted_img, smil_gradient, se(szSE))
    # Create watershed segmentation
    smil_watershed = sp.Image("UINT8")
    smil_basins_out = sp.Image("UINT16")
    sp.watershed(smil_gradient, smil_labeled_minima, smil_watershed, smil_basins_out)
    # Retrieve the watershed result
    watershed_result = smil_basins_out.getNumpyArray()
    watershed_result *= background_mask
    np.save("smil.npy", watershed_result)
    print("Maximum value in watershed result:", watershed_result.max())
I did not use SMIL's image class for loading because it cannot load certain filetypes. But the transfer from numpy should work without issues.
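One way to check that assumption directly, using the same conversion calls as above (a sketch; the non-cubic shape makes an axis swap visible):

import numpy as np
import smilPython as sp

vol = np.random.randint(0, 255, size=(32, 64, 128), dtype=np.uint8)
im = sp.Image("UINT8")
im.fromNumpyArray(vol)
back = im.getNumpyArray()
print(vol.shape, back.shape)        # differing shapes would mean the axes are swapped
print(np.array_equal(vol, back))    # False would explain the stripy slices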
There is a bug in the function fromNumpyArray. I need to find some time to dedicate to this.
Meanwhile, it seems that the call below works, and it's better to use it to convert numpy arrays to Smil images: smil_image = sp.Image(numpy_image)
If this doesn't work, you can also write a function that converts the image by copying values voxel by voxel: a simple loop.
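Something along these lines (a sketch only; setSize and setPixel here are assumptions about the Python API to verify against the documentation):

import smilPython as sp

def numpy_to_smil_voxel_by_voxel(arr):
    # Fallback conversion: copy the values one voxel at a time (slow but unambiguous)
    depth, height, width = arr.shape
    im = sp.Image("UINT16")
    im.setSize(width, height, depth)                       # assumed allocation call
    for z in range(depth):
        for y in range(height):
            for x in range(width):
                im.setPixel(x, y, z, int(arr[z, y, x]))    # assumed per-voxel setter
    return im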
Also, in your function, I think you should use only Smil functions to do the watershed: label, gradient, basins, ...
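Roughly like this (a sketch; the gradient and watershed calls match the script above, while the minima and label signatures are assumptions to check against the docs):

import smilPython as sp

def watershed_all_smil(im_in):
    # im_in: a UINT8 sp.Image; every step stays inside Smil
    se = sp.Cross3DSE()
    im_grad = sp.Image(im_in)
    im_min = sp.Image(im_in)
    im_labels = sp.Image(im_in, "UINT16")   # assumed: same size, different data type
    im_ws = sp.Image(im_in)
    im_basins = sp.Image(im_in, "UINT16")

    sp.gradient(im_in, im_grad, se)                     # morphological gradient
    sp.minima(im_in, im_min, se)                        # markers from the minima (assumed signature)
    sp.label(im_min, im_labels, se)                     # label the markers (assumed signature)
    sp.watershed(im_grad, im_labels, im_ws, im_basins)
    return im_basins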
Thank you again for providing this library, and I look forward to your reply.