MouseLand / cellpose

a generalist algorithm for cellular segmentation with human-in-the-loop capabilities
https://www.cellpose.org/
BSD 3-Clause "New" or "Revised" License

Expanding cellpose applicability by augmenting the current dataset #128

Closed · kevinjohncutler closed this 2 years ago

kevinjohncutler commented 4 years ago

A great deal of performance may be gained by manipulating images to be more 'like' other images on which cellpose has been trained. Below I'll give a direct comparison:

Original image:

[image]

Adjusted:

[image]

The actual transformation on the original image is `rescale(1./(1+meannorm(im)))`, where `meannorm` scales the image so its mean is 0.5 (`im` was previously in the range [0, 1]) and `rescale` maps the result back onto [0, 1]. Curiously, if I do not rescale the input, the returned segmentation is slightly different:

[image]
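
For concreteness, here is a minimal NumPy reconstruction of that transform (my own sketch based on the description above, not code from cellpose; `meannorm` and `rescale` are named after their MATLAB counterparts):

```python
import numpy as np

def meannorm(im):
    # scale so the mean lands at 0.5; im is assumed to lie in [0, 1]
    return im * (0.5 / im.mean())

def rescale(im):
    # map back onto [0, 1], as MATLAB's rescale() does by default
    return (im - im.min()) / (im.max() - im.min())

def adjust(im):
    # the transformation described above: rescale(1./(1+meannorm(im)))
    return rescale(1.0 / (1.0 + meannorm(im)))
```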

As far as the rescaling goes, this could be an artifact of the online tool. The images above all happened to have a bit depth of 8 (apparently the MATLAB default, at least when saving to PNG instead of TIFF). Saving explicitly with a bit depth of 16 appears to give the same result as the rescaled [0, 255] image (labeled 'adjusted' above). I need to do more testing with a local instance to see whether rescaling actually matters.

However, I feel that cellpose can and should be made to work without manipulating the input images in this way. One easy solution would be to take the existing training set, apply various transformations to it (such as the inversion above), and re-train. This would vastly expand the set of image 'types' that cellpose can segment.

I wonder what set of transformations would be needed to fully generalize cellpose, in the sense that it would be insensitive to any per-pixel transformation of the image. Of course, one could go the other route: find a way to automatically transform input images into one of the closest 'types' that cellpose already handles well.

I also wonder whether the dataset could be augmented to successfully segment slightly out-of-focus cells (defocused by distance from the optical axis, vibrations, or angled and irregular cell substrates). My idea would be to blur the existing dataset (uniformly or with some spatial dependence) and re-train. I can see this tolerance of out-of-focus regions conflicting with 3D segmentation, so even if it were successful, there would need to be two models: one for single-plane images and one for z-stacks.
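
To sketch what I mean by spatially dependent blurring, here is one way to do it with SciPy (the radial ramp and `max_sigma` are arbitrary illustrative choices, not a tested augmentation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def defocus_augment(im, max_sigma=3.0):
    # blend a sharp copy with a blurred copy, weighting the blur more
    # heavily toward the image edges to mimic off-axis defocus
    h, w = im.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
    blurred = gaussian_filter(im.astype(float), sigma=max_sigma)
    return (1 - r) * im + r * blurred
```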

b2jia commented 4 years ago

This is a really interesting post. I wonder if you could get the same performance boost by histogram-matching the input image against the training set.
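
For reference, scikit-image ships a routine for exactly this, so the hack is a one-liner (the filenames below are placeholders; the reference would be a representative training-set image):

```python
from skimage import exposure, io

im = io.imread('my_input.png')         # image to segment (placeholder)
ref = io.imread('training_style.png')  # representative training-set image (placeholder)

# remap the input's intensity distribution onto the reference's
matched = exposure.match_histograms(im, ref)
```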

kevinjohncutler commented 4 years ago

Histogram matching is a really good idea; I have not tried it. I do have a few follow-ups now that I've had some time training the network on my datasets and learning about cellpose's architecture.

First, there is a layer that is supposed to be doing something like this automatically, choosing parameters based on the image "style". I'm still not sure what this entails or how it depends on the images the network was trained on.

Second, I never considered how some image transformations can lead to a loss of information. That leads to my third and most important observation: cellpose might work a little better out of the box with preprocessing such as what I tested above, but it actually does much better trained from scratch on your own raw data. I compared training on just raw images, just transformed images, and both together. This deserves a full quantitative treatment, but what I observed qualitatively on test images is that the raw images always gave the best segmentation after training, even when the pretrained 'cyto' model did better on preprocessed images. This was surprising, but it could come back to the idea of information loss from weird nonlinear transformations. That said, it may actually be desirable to train cellpose on noisy images to make it more applicable to dim fluorescence imaging modalities.

The big question I still have is: how generalized can cellpose get? A generalist, imperfect algorithm may be useful for a lot of applications, but for me, I arrived at the best network parameters by ignoring the pre-trained models. I plan on expanding the dataset to different cell morphologies under the same imaging modality, but that is still generalizing within a relatively small niche, so it might just be best to have models specific to, say, phase contrast microscopy, at least when the application calls for extremely accurate cell boundaries.

carsen-stringer commented 4 years ago

I have not seen large improvements in segmentation from histogram equalization, so I haven't added it to the GUI. That said, I've focused largely on trying to improve segmentation of calcium imaging data (and that data is included in the training set in raw form). Thanks for sharing what you found, @kevinjohncutler.

In the example image you shared, I screen-grabbed the raw image and did a bit better by inverting the image before running it through the model (check-box in the GUI):

[image]
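
(For anyone scripting this instead of using the GUI: as far as I can tell, the same inversion is exposed in the Python API through an `invert` flag on `eval`; a minimal sketch, with placeholder filename and channel settings:)

```python
from cellpose import models, io

img = io.imread('cells.png')  # placeholder filename
model = models.Cellpose(gpu=True, model_type='cyto')

# invert=True flips pixel intensities before the network runs, which should
# correspond to the GUI check-box; check your version's eval() docs
masks, flows, styles, diams = model.eval(img, channels=[0, 0], invert=True)
```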

Related to this, have you tried training the cyto model with inverted images? It may still do worse than your model trained only on your own data.

kevinjohncutler commented 4 years ago

Yes, I have tried inverted-image training, both starting from the cyto model and from scratch, and performance is best in the latter case. I have also found that I can pool together all the transformed images to train a model that generalizes across all of the 'fake modalities' quite well. But, to reiterate the point above, nothing does as well as training and evaluating on unprocessed data (as far as really small details and tricky interfaces are concerned).

marius10p commented 4 years ago

We'll be releasing the training dataset soon, and then you can play with it. We already do a lot of augmentation using geometric image transformations. It might help to add some intensity transformations, but keep in mind that every image is already intensity-normalized as a preprocessing step. Intensity flips, however, might help more, and so could blurring and deblurring operations.
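
A sketch of what such intensity augmentations could look like (illustrative only; this is not what cellpose's augmentation pipeline actually does, and the parameter ranges are arbitrary):

```python
import numpy as np

def intensity_augment(im, rng=None):
    # randomly flip and gamma-shift an image already normalized to [0, 1]
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < 0.5:
        im = 1.0 - im                    # intensity flip
    return im ** rng.uniform(0.7, 1.5)   # random gamma shift
```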

pskeshu commented 3 years ago

As someone who is very new to cellpose, can someone guide me on how to train cellpose with my own images?

kevinjohncutler commented 3 years ago

> As someone who is very new to cellpose, can someone guide me on how to train cellpose with my own images?

I'm actively doing this myself. The GUI is designed as a tool for annotating your images efficiently with a mouse, and it has some documentation on the wiki. My personal workflow is to use Photoshop and MATLAB (for pen input and programmatic file management, respectively), but I am happy to explain that if you are interested. The end result is the same either way: a directory of images and their corresponding label matrices. This is the directory you tell cellpose to train on, using the CLI commands found in the wiki. I find it works well to have only a hundred or so cells per image (for me that means dividing my images into sixteenths) and then re-train the cyto model once you have a few such images.
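
To illustrate that layout, here is a quick sanity check over such a directory (cellpose's default label suffix is `_masks`; the folder path is a placeholder):

```python
from pathlib import Path

train_dir = Path('train')  # placeholder: your folder of images plus labels
for img in sorted(train_dir.glob('*.tif')):
    if img.stem.endswith('_masks'):
        continue  # skip the label files themselves
    mask = img.with_name(img.stem + '_masks.tif')
    print(img.name, '->', mask.name if mask.exists() else 'MISSING LABELS')
```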

pskeshu commented 3 years ago

I wanted to know about the overall pipeline and the mechanics that go into retraining a model. Some code examples would be very helpful.

kevinjohncutler commented 3 years ago

> I wanted to know about the overall pipeline and the mechanics that go into retraining a model. Some code examples would be very helpful.

Here is the link to the wiki page for running the training: https://cellpose.readthedocs.io/en/latest/train.html

Depending on the kind of segmentation you want to do, you would choose between the cytoplasmic and nuclear models as a starting point for training. Once you have enough ground-truth data, you may find you get better results by training the network from scratch ('None').
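
As a rough sketch of that choice in the Python API (this assumes the training interface documented at the link above; argument names have shifted between cellpose versions, so treat it as illustrative rather than exact):

```python
from cellpose import io, models

# images like img.tif paired with labels like img_masks.tif
data = io.load_train_test_data('train', mask_filter='_masks')
images, labels = data[0], data[1]

# start from the pretrained cytoplasm model...
model = models.CellposeModel(gpu=True, model_type='cyto')
# ...or from random weights ('None' in the CLI):
# model = models.CellposeModel(gpu=True, pretrained_model=None)

model.train(images, labels, channels=[0, 0], save_path='train', n_epochs=500)
```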

pskeshu commented 3 years ago

Haha, so are you saying training from scratch is preferable to retraining cellpose models?

kevinjohncutler commented 3 years ago

> Haha, so are you saying training from scratch is preferable to retraining cellpose models?

It can be. I would try both, and possibly even training on the original cellpose dataset combined with your own training data (if it is available for download yet). Re-training from the cyto or nuclei models is just a particular initialization of the network; it could be that randomized weights let you converge to a better model for your particular data.

pskeshu commented 3 years ago

Okay. I've never trained anything more than MNIST before, so kinda nervous about that πŸ˜…


kevinjohncutler commented 3 years ago

> Okay. I've never trained anything more than MNIST before, so kinda nervous about that πŸ˜…


Haha it is just as easy! But it will be slow if you don't have a decent NVIDIA GPU.

pskeshu commented 3 years ago

I've got a GTX 1660 Ti with 6 GB of memory in my laptop. I can get a better one if you have suggestions.


iyasasse commented 3 years ago

> As someone who is very new to cellpose, can someone guide me on how to train cellpose with my own images?
>
> I'm actively doing this myself. The GUI is designed as a tool for annotating your images efficiently with a mouse, and it has some documentation on the wiki. […]

I am interested in your workflow; could you explain it to me? I want to call cellpose from MATLAB but cannot get it to run segmentation.

carsen-stringer commented 2 years ago

Please use the latest version of cellpose if you want to train through the GUI (`pip install cellpose --upgrade`).