vanvalenlab / deepcell-label

Cloud-based data annotation tools for biological images
https://label.deepcell.org

CellSAM does not segment 3D organoid images and has limited functionality on the web app #529


UddeshyaPandey commented 1 year ago

Describe the bug I am interested in using CellSAM, the foundation model for cell segmentation described in your paper [A foundational model for cell segmentation]. However, when I uploaded my own 3D organoid images to the web app at [https://label-dev.deepcell.org/], the segmentation option did not appear in the drop-down menu in the top-left corner. Additionally, the Action button was mostly inactive when I used the example images provided by the web app.

To Reproduce Steps to reproduce the behavior:

  1. Upload 3D organoid images to the web app.
  2. Try to select the segmentation option from the drop-down menu on the top left corner.
  3. Try to use the action button with both the uploaded images and the example images.

Expected behavior I expected to be able to segment my images using CellSAM and to edit, save, or download the results using the Action button. I would also like to run CellSAM locally on my own machine, but I could not find the data and code for the model. I understand that the paper is not yet published, but I would appreciate it if you could share the resources or provide some instructions on how to use CellSAM.

Screenshots

  1. The Action button is inactive with the example 3D segmentation image.

  2. Here I loaded my own data, an organoid image. The Feature drop-down does not appear, the Action button is still inactive, and CellSAM does not segment anything.

Thank you for your time and attention. I hope you can fix these issues or provide some guidance on how to use CellSAM effectively. I think it is a very promising model for cell segmentation and I would like to apply it to my own data.

rossbar commented 12 months ago

Hi @UddeshyaPandey , thanks for the feedback!

The current version deployed at label-dev.deepcell.org is a development deployment accompanying the preprint to allow folks to begin to experiment with their own data. A couple quick notes that may be useful:

  1. CellSAM does not currently support 3D spatial data (see section 3 of the preprint).
  2. The reason the button is greyed out is that the development deployment currently only supports single-channel images. There is a ton of ambiguity on what individual channels can represent (RGB, various markers, etc.) in cell imaging data. We're working on a convenient UI to allow users to select which channels are of interest and indicate what they represent, but for now we require users to do this preprocessing themselves. If you are able to collapse your input image down to a single channel that you think contains the relevant structural information, you should be able to get predictions as expected. Note also that if you hover over the greyed-out button, you should see a tooltip popup that explains the single-channel image requirement.
  3. An unfortunate consequence of the single channel limitation is that the current example images will not work with the CellSAM tab as all of the example images are multichannel. This is a silly oversight on my part - we'll work on getting some relevant CellSAM examples up ASAP.
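For point 2, a minimal sketch of that preprocessing step, assuming your image loads as an H × W × C NumPy array (the shapes, channel index, and random data below are purely illustrative):

```python
import numpy as np

# Stand-in for a real multichannel image loaded as (height, width, channels).
rng = np.random.default_rng(0)
img = rng.random((256, 256, 3))

# Option 1: keep only the channel that carries the structural signal
# (e.g. a nuclear or membrane marker; channel 0 here is just an example).
single = img[..., 0]

# Option 2: max-project across channels if the signal is spread over them.
projected = img.max(axis=-1)

print(single.shape, projected.shape)  # both collapse to (256, 256)
```

Either 2D array can then be saved (e.g. as a single-channel TIFF) and uploaded; which option is appropriate depends on what each channel in your data represents.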

Hopefully this helps! Please follow up as you run into other issues; your feedback is very welcome!