MouseLand / cellpose

a generalist algorithm for cellular segmentation with human-in-the-loop capabilities
https://www.cellpose.org/
BSD 3-Clause "New" or "Revised" License

Implement Counting within specified ROI? #2

Closed ElieG1234 closed 1 year ago

ElieG1234 commented 4 years ago

Is there any easy way to implement ROIs into the pipeline to segment within predefined areas? Thanks so much!

marius10p commented 4 years ago

That's an interesting idea, I think we can do that. Would it be enough if only one ROI was active at any one time?
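
In the meantime, one workaround from Python is to crop the image to a rectangular ROI before segmenting. A minimal sketch, assuming the `models.Cellpose` API; the image and ROI bounds are placeholders:

```python
# A rough sketch of a current workaround from Python (not the requested GUI
# feature): crop the image to a rectangular ROI before segmenting.
# `img` and the ROI bounds are placeholders.
from cellpose import models

def segment_roi(img, y0, y1, x0, x1, diameter=None):
    """Run Cellpose only on the rectangular ROI and return its label image."""
    crop = img[y0:y1, x0:x1]
    model = models.Cellpose(model_type='cyto')
    masks, flows, styles, diams = model.eval(crop, diameter=diameter, channels=[0, 0])
    n_cells = int(masks.max())                # labels are 1..N inside the crop
    return masks, n_cells

# masks_roi, n = segment_roi(img, 2000, 4000, 3000, 5000)
```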

ElieG1234 commented 4 years ago

Wonderful, thank you! Yes, I believe one active at a time would be fine, as long as each ROI's position can be saved and visualized while the other ROIs on the same image are active. That way we can be certain we are not counting the same areas twice.

cyf203 commented 4 years ago

Fantastic! I could not agree more! I work with mouse and rat brains a lot, and often we count neurons not in one region but across multiple areas. So it would be very useful to have at least a basic version of ROI selection, and hopefully we could manually define the shape of each ROI.

marius10p commented 4 years ago

Would it be more useful to segment the whole image first, and then have a polygon ROI selection tool for counting? How large are the images that need to be analyzed like this?
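
For reference, the counting step on an existing whole-image segmentation could look roughly like this in a notebook. This is a sketch only, assuming the Cellpose label image `masks` (0 = background, 1..N = cell IDs); the polygon vertices are made up:

```python
# Count cells whose centroid falls inside a user-drawn polygon ROI,
# given a whole-image Cellpose segmentation `masks`.
import numpy as np
from matplotlib.path import Path
from scipy import ndimage

def count_cells_in_polygon(masks, polygon_xy):
    """Count cells whose centroid lies inside the polygon (vertices in (x, y))."""
    cell_ids = np.unique(masks)
    cell_ids = cell_ids[cell_ids > 0]
    # centroids come back as (row, col) = (y, x)
    centroids = ndimage.center_of_mass(np.ones_like(masks), masks, cell_ids)
    roi = Path(polygon_xy)
    inside = roi.contains_points([(x, y) for y, x in centroids])
    return int(inside.sum()), cell_ids[inside]

# example with a hypothetical triangular ROI:
# n_cells, ids = count_cells_in_polygon(masks, [(50, 50), (400, 80), (200, 450)])
```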

cyf203 commented 4 years ago

I think either way is useful! As I am working on brain-wide cell counting, I usually count a whole coronal section of a rat brain, and the image size is about 21197 x 15975, 16-bit depth, with maybe 2-3 channels (G, R, B). I know it sounds ridiculous, but it would be nice to explore the possibility.

ElieG1234 commented 4 years ago

At least for my purposes, the order of operations does not matter much; segmentation can happen before or after ROI selection.

Thinking about this more, an important aspect will be allowing for some measurement or distance tool so that our ROIs are properly placed. In our case, we have histological sections of brain tissue with fiducial marks in them. We know that the ROIs we wish to count cells in are circles with a radius equal to half the distance between the fiducial marks. So as long as we can measure the distance between the fiducial marks within the GUI and apply that measurement to the selected ROI, we will be set. This measurement can be in units of pixels, as that may be easiest.

Our images are similarly sized to @cyf203's, as they are tiled images from a confocal. Sometimes they are a bit smaller, like 10000 x 10000, 16-bit depth, multiple channels.
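
To make that concrete, here is a rough sketch of the counting step I have in mind, assuming the whole image has already been segmented into a Cellpose label image `masks` (the fiducial coordinates and ROI center are made up):

```python
# Sketch of the fiducial-mark workflow: the two marks are given as pixel
# coordinates, the ROI radius is half the distance between them, and counting
# is restricted to a circle of that radius around a chosen center.
import numpy as np
from scipy import ndimage

def count_cells_in_circle(masks, center_xy, radius_px):
    """Count cells whose centroid lies within radius_px of center_xy (pixels)."""
    cell_ids = np.unique(masks)
    cell_ids = cell_ids[cell_ids > 0]
    centroids = np.array(ndimage.center_of_mass(np.ones_like(masks), masks, cell_ids))
    # centroids are (y, x); center_xy is (x, y)
    d = np.hypot(centroids[:, 1] - center_xy[0], centroids[:, 0] - center_xy[1])
    return int((d <= radius_px).sum())

# hypothetical fiducial marks in pixel coordinates
fid_a = np.array([1200.0, 950.0])
fid_b = np.array([5200.0, 1010.0])
radius = 0.5 * np.hypot(*(fid_b - fid_a))   # half the mark-to-mark distance
# n = count_cells_in_circle(masks, center_xy=(3000, 4000), radius_px=radius)
```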

cyf203 commented 4 years ago

The other thing I wonder is: if we have a way to draw a polygon ROI, how can we make sure ROI 1 and ROI 2 do not overlap when two areas are very close, especially if we do one ROI at a time? I am thinking it may be possible to register the coordinates of previously selected ROIs and prevent the user from selecting any area within them. If the next selection overlaps the previous one, the boundary of the last ROI could be used as part of the new one.
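
Something like the following bookkeeping is what I am imagining; this is only a sketch with made-up names, not existing Cellpose behaviour:

```python
# Keep a boolean "claimed" image of everything already inside an earlier ROI,
# and clip each new polygon ROI against it so no pixel is counted twice.
import numpy as np
from skimage.draw import polygon2mask

claimed = None  # same shape as the image, True where an earlier ROI was placed

def add_roi(image_shape, polygon_yx, claimed):
    """Rasterize a new polygon ROI, removing any pixels already claimed."""
    if claimed is None:
        claimed = np.zeros(image_shape, dtype=bool)
    new_roi = polygon2mask(image_shape, polygon_yx)   # vertices in (row, col) order
    new_roi &= ~claimed            # clip against all previous ROIs
    claimed |= new_roi             # register the new ROI for the next selection
    return new_roi, claimed

# roi1, claimed = add_roi(img.shape[:2], poly1, claimed)
# roi2, claimed = add_roi(img.shape[:2], poly2, claimed)  # overlap with roi1 removed
```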

marius10p commented 4 years ago

Given the very large image sizes, do you think this should be done manually or offline as part of a batch processing pipeline? Optimizing the GUI for such large images is not a big priority for Cellpose right now. We hope that functionality could be obtained via downstream software, like ImagePy and Napari @yxdragon, @sofroniewn.

yxdragon commented 4 years ago

"that functionality could be obtained via downstream software" agree it! (so is there any way to pip install cellpose without pyqt? it is large, but not useful for imagepy) Any features needed such as roi can pop an issue https://github.com/Image-Py/cellpose-plgs.

mbpalacio commented 4 years ago

Hi, I don't know anything about programming, but I found that this program could be helpful for me. I work with neural crest cells, and this is the only tool that shapes them almost perfectly. Is there any way for the algorithm to distinguish the protrusions at the free edges? For example, a length measured from the center of the nucleus, with a color code for it: if the cell generates very large protrusions, the area very close to the nucleus would be white and the more distant areas would be reddish, and so on. Another thing I have been struggling with is calibration. For me it has been more trial and error: in ImageJ I select the length of the nucleus and set that as 10 um, but here I just have to guess the pixel-to-micron ratio. Is there anything like the calibration in ImageJ?

marius10p commented 4 years ago

Thanks for the idea for how to calibrate the cell diameter; we'll do that.
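
In the meantime, the calibration can be done by hand before running Cellpose, since the diameter argument is in pixels. A rough sketch with made-up numbers:

```python
# ImageJ-style calibration done by hand (not a built-in GUI feature): measure a
# known length in pixels, derive microns per pixel, and pass the expected cell
# diameter to Cellpose in pixel units.
from cellpose import models

known_length_um = 10.0          # e.g. the nucleus length mentioned above
known_length_px = 62.0          # the same length measured in pixels (hypothetical)
um_per_px = known_length_um / known_length_px

expected_cell_diam_um = 18.0    # hypothetical biological estimate
diam_px = expected_cell_diam_um / um_per_px

model = models.Cellpose(model_type='cyto')
# masks, flows, styles, diams = model.eval(img, diameter=diam_px, channels=[0, 0])
```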

Sounds like you are interested in some kind of image analytics downstream of segmentation. The quickest thing would be to load the segmentations into a jupyter notebook and do these analytics yourself (i.e. find each cell's center, compute distances from its pixels to that center, and make the corresponding image).
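
For example, something along these lines would give a distance-from-center image for each cell; a sketch assuming the Cellpose label image `masks`:

```python
# For each segmented cell, color every pixel by its distance to the cell's
# centroid, so long protrusions show up as high values.
import numpy as np
from scipy import ndimage

def distance_from_center_image(masks):
    ids = np.unique(masks)
    ids = ids[ids > 0]
    centers = ndimage.center_of_mass(np.ones_like(masks), masks, ids)
    yy, xx = np.indices(masks.shape)
    dist = np.zeros(masks.shape, dtype=np.float32)
    for cid, (cy, cx) in zip(ids, centers):
        sel = masks == cid
        dist[sel] = np.hypot(yy[sel] - cy, xx[sel] - cx)
    return dist   # visualize e.g. with plt.imshow(dist, cmap='hot')
```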

We are working to export Cellpose in other frameworks that can do other analyses, but it may be a while before the particular functionality you require is enabled in these downstream frameworks.

sofroniewn commented 4 years ago

A napari plugin for Cellpose sounds great. We're still working on defining the basics of our plugin infrastructure, but once that is done, making one seems like high value!

mrariden commented 1 year ago

Since there are now existing plugins for napari and QuPath, I am closing this issue as complete.