brainglobe / cellfinder

Automated 3D cell detection in very large images
https://brainglobe.info/documentation/cellfinder/index.html
BSD 3-Clause "New" or "Revised" License

[Feature] .czi support and single plane #169

Closed (juliencarponcy closed this 3 years ago)

juliencarponcy commented 3 years ago

Is your feature request related to a problem? Please describe. I just wanted to try out cellfinder, but based on what I saw in the docs, I'm not sure whether my particular image structure and data formats can be used. Specifically, I want to know whether it is possible to use it on single sections acquired as tile scans. My best guess is no, as per the docs:

"Image structure Although we hope to support more varied types of data soon, your images must currently:

I was also looking for clarification on what can concretely be done with the .czi support in napari. Are you planning to handle .czi directly from cellfinder? I guess that may already be the case, provided the data structure fits the description above, which would still make it unusable for confocal images of single sections?

Describe the solution you'd like I wanted confirmation of whether or not cellfinder / napari can use single-section images. Secondarily, if or when that becomes possible, does the image absolutely need to be a z-stack, or will/can it work on a single plane? If so, I assume optical sections might best be about 5um thick? If it absolutely requires several planes to detect cells properly, what is your recommendation for the best trade-off between image resolution (mostly along z) and acquisition time on a confocal?

Describe alternatives you've considered I'm currently working on an extension of the SHARP-Track project (https://github.com/cortex-lab/allenCCF and https://github.com/petersaj/AP_histology). I just committed an extension here: https://github.com/juliencarponcy/AP_histology. We are then considering running ROI detection in a third-party tool such as CellProfiler or StarDist and matching the ROI coordinates to the transformed atlas coordinates of our sections. Your work integrates all of these aspects in a single framework, which I expect will be better in the end, but right now it looks best suited to whole-brain imaging with clearing techniques.

Additional context No more context. Just trying to find the easiest and quickest way forward, and to ask a bit about where you are with this.

adamltyson commented 3 years ago

Hi @juliencarponcy. At the moment, cellfinder is designed for whole-brain microscopy data, i.e. single 3D images covering the entire brain, of the type you might get from serial two-photon or lightsheet microscopy in cleared tissue. This broadly makes cellfinder incompatible with confocal data, unless you have a very fast microscope, or a lot of time.

If data of this type is acquired as czi format, it would be easy enough to add support for that.
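For anyone arriving here with .czi data in the meantime, a minimal sketch of a possible workaround is to convert the .czi into a directory of single-plane TIFFs, the layout the cellfinder docs describe for 3D data. This is not an official cellfinder feature: it assumes the third-party `czifile` and `tifffile` packages, a hypothetical input file name, and that the image squeezes down to a plain (z, y, x) array; axis order varies between acquisitions, so check your own data.

```python
# Hedged sketch: convert a .czi z-stack to one 2D TIFF per plane.
# Assumes czifile and tifffile are installed; file names are hypothetical,
# and the (z, y, x) axis order after squeezing is an assumption.
from pathlib import Path

import czifile
import tifffile

czi_path = Path("brain_stack.czi")   # hypothetical input file
out_dir = Path("signal_channel")     # directory of single-plane TIFFs
out_dir.mkdir(exist_ok=True)

stack = czifile.imread(str(czi_path)).squeeze()  # drop singleton CZI dimensions
assert stack.ndim == 3, f"expected (z, y, x), got shape {stack.shape}"

for z, plane in enumerate(stack):
    tifffile.imwrite(out_dir / f"plane_{z:04d}.tif", plane)
```

This only helps if the data are genuinely 3D; as noted in the next paragraph, single planes are not enough for the current detection algorithm.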

Regarding 2D data specifically: at the moment, cellfinder absolutely needs 3D data (i.e. each cell should appear in more than one image plane). 5um spacing is optimal, but 10 and even 20um can work well (see preprint). I had always assumed that existing 2D solutions were sufficient and that cellfinder could concentrate on 3D data. However, many people have now asked for a 2D cellfinder, so I will likely work on it, but I don't have a timescale yet. In the meantime, while both of the projects you linked are excellent, @nickdelgrosso is working on a 2D registration pipeline that will be better integrated with the rest of the BrainGlobe software (https://github.com/brainglobe/slicereg). slicereg will initially be compatible with QuPath feature detection, but my long-term aim is to add compatibility for 2D cellfinder cell detection, and other tools such as cellpose and stardist.
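As an illustration of the 3D requirement and plane spacing discussed above, here is a hedged sketch of running detection through the cellfinder-core Python API on arrays already loaded in memory. The import path, call signature, and voxel-size convention ([z, y, x] in microns, plane axis first) reflect my reading of the brainglobe documentation rather than anything confirmed in this thread, so treat them as assumptions and check the docs for your installed version.

```python
# Hedged sketch: 3D cell detection via the cellfinder-core API.
# Assumes cellfinder-core and tifffile are installed; file names are
# hypothetical, and the voxel_sizes convention ([z, y, x] in microns)
# is an assumption to verify against the documentation.
import tifffile
from cellfinder_core.main import main as cellfinder_run

signal = tifffile.imread("signal_stack.tif")          # 3D stack (z, y, x), cell marker channel
background = tifffile.imread("background_stack.tif")  # second, autofluorescence channel

voxel_sizes = [5, 2, 2]  # 5um plane spacing is optimal; 10-20um can still work

detected_cells = cellfinder_run(signal, background, voxel_sizes)
print(f"{len(detected_cells)} candidate cells detected")
```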

Basically, a long-winded way of saying: .czi support would be easy to add for whole-brain 3D data, but single-section / 2D data isn't supported yet, and although 2D support and slicereg are planned, I can't promise a timescale.

juliencarponcy commented 3 years ago

Thanks very much, that's what I thought.

Your perspectives on this and related projects are indeed helpful. Regarding contribution, I'm afraid I would be terribly slow and inefficient in Python, so I'll stick to the other projects for now, although I will look in periodically to see what's happening around here.

Best wishes,

Julien