qupath / qupath-extension-instanseg

The official QuPath extension for InstanSeg
https://github.com/instanseg/instanseg
Apache License 2.0

QuPath InstanSeg extension


🚧 Work-in-progress! 🚧

The easiest way to try this out is with the new QuPath 0.6.0 Release Candidate!


Welcome to the InstanSeg extension for QuPath!

InstanSeg is a novel deep-learning-based method for segmenting nuclei and cells... and potentially much more.

Looking for the main InstanSeg code for training models? Find it at https://github.com/instanseg/instanseg.

What is InstanSeg?

You can learn more about InstanSeg in two preprints.

For an introduction & comparison to other approaches for nucleus segmentation in brightfield histology images, see:

Goldsborough, T. et al. (2024) ‘InstanSeg: an embedding-based instance segmentation algorithm optimized for accurate, efficient and portable cell segmentation’. arXiv. Available at: https://doi.org/10.48550/arXiv.2408.15954.

To read about InstanSeg's extension to nucleus + full cell segmentation and support for fluorescence & multiplexed images, see:

Goldsborough, T. et al. (2024) ‘A novel channel invariant architecture for the segmentation of cells and nuclei in multiplexed images using InstanSeg’. bioRxiv, p. 2024.09.04.611150. Available at: https://doi.org/10.1101/2024.09.04.611150.

Why should I try InstanSeg?

  1. It's fully open-source
    • We also provide models pre-trained on open datasets
  2. It's not limited to nuclei... or to cells
    • One model can provide different outputs: nuclei, cells, or both
  3. It's accurate compared to popular alternative methods
    • In our hands, InstanSeg consistently achieved the best F1 score across multiple datasets compared to CellPose, StarDist, HoVer-Net and Mesmer. But everyone's images are different & fair benchmarking is hard - check out the preprints & judge what works best for you!
  4. It's faster than other methods (usually much faster)
    • InstanSeg supports GPU acceleration with CUDA and with Apple Silicon (so Mac users can finally have fast segmentation too!)
  5. It's portable
    • The full pipeline including postprocessing compiles to TorchScript - so you can also run it from Python & DeepImageJ.
  6. It's easy to use
    • InstanSeg models are trained at a specific pixel resolution (e.g. 0.5 µm/px). As long as your image has pixel calibration set, this extension will deal with any resizing needed to keep InstanSeg happy.
  7. It's uniquely easy to use for fluorescence & multiplexed images
    • When used with ChannelNet, InstanSeg supports any* number of channels in any order! There's no need to select channels manually, or retrain for different markers.

*At least, as many channels as your computer's memory allows
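To make point 6 concrete, the resizing the extension handles for you boils down to a simple ratio between the model's training resolution and your image's pixel calibration. This is an illustrative sketch only, not the extension's actual implementation:

```groovy
// Illustrative sketch (not the extension's actual code):
// a model trained at 0.5 µm/px, applied to an image calibrated
// at 0.25 µm/px, needs a 2x downsample before inference.
double modelPixelSize = 0.5   // µm/px the model was trained at
double imagePixelSize = 0.25  // µm/px from the image's pixel calibration
double downsample = modelPixelSize / imagePixelSize
println "Downsample factor: " + downsample   // prints 2.0
```

This is why the extension needs your image's pixel calibration to be set - without it, there is no way to compute the ratio.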

How do I get InstanSeg in QuPath?

This extension is for QuPath v0.6.0... which we plan to release in October 2024.

If you can't wait, you can try the release candidate v0.6.0-rc1 - which comes with this extension pre-installed, along with the Deep Java Library Extension.

GPU support

If you have an NVIDIA graphics card & want to use CUDA, check out GPU support.

If you use a recent Mac with Apple silicon, no special configuration should be needed - just choose 'MPS' as the device (described below).

How do I run InstanSeg in QuPath?

There are two steps to get started:

  1. Use Extensions → Deep Java Library → Manage DJL Engines to download PyTorch
    • See docs here - especially if you want it to work with CUDA (which can be tricky to configure)
  2. Use Extensions → InstanSeg → Run InstanSeg to launch InstanSeg

The dialog should guide you through what to do next:

  1. Choose a directory on your computer to download pre-trained models
  2. Pick a model and download it
  3. Select one or more annotations
  4. Press Run

What do the additional options do?

There are several options to customize how InstanSeg runs. These correspond to the builder parameters in the script below: the device, number of threads, tile dimensions and padding, input and output channels, whether to make measurements, and whether to assign random colors to the detected objects.

How do I run this across multiple images?

The extension is scriptable - the core parameters are logged in the workflow history and can be converted into a script.

See Workflows to scripts in the docs for more details.

Or modify the example script below:

```groovy
qupath.ext.instanseg.core.InstanSeg.builder()
    .modelPath("/path/to/some/model")    // local path to a downloaded model
    .device("mps")                       // e.g. "cpu", "cuda" or "mps" (Apple silicon)
    .nThreads(4)                         // number of threads to use
    .tileDims(512)                       // tile width & height, in pixels
    .interTilePadding(32)                // padding between adjacent tiles, in pixels
    .inputChannels([
        ColorTransforms.createChannelExtractor("Red"),
        ColorTransforms.createChannelExtractor("Green"),
        ColorTransforms.createChannelExtractor("Blue")
    ])
    .outputChannels()                    // no arguments specified here
    .makeMeasurements(true)              // add measurements to the detected objects
    .randomColors(false)                 // don't assign random colors to detections
    .build()
    .detectObjects()
```
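The dialog workflow runs on your selected annotations (step 3 above). In a script, one approach - a sketch under the assumption that detectObjects() likewise operates on the current selection, and that unset builder options fall back to defaults - is to select the annotations first with QuPath's built-in selectAnnotations():

```groovy
// Sketch: select all annotations, then run InstanSeg on them.
// Assumes detection applies to the current selection and that
// omitted builder options use sensible defaults.
selectAnnotations()   // built-in QuPath scripting method

qupath.ext.instanseg.core.InstanSeg.builder()
    .modelPath("/path/to/some/model")   // placeholder - point this at a downloaded model
    .device("cpu")                      // or "cuda" / "mps", as available
    .build()
    .detectObjects()
```

Combined with QuPath's Run for project command, a script like this can process every image in a project.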

How do I cite this?

If you use this extension in any published work, please cite:

  1. At least one of the two InstanSeg preprints above (whichever is most relevant)
  2. The main QuPath paper - details here