imjoy-team / imjoy-interactive-segmentation

MIT License


# ImJoy-powered Interactive Segmentation

This project enables deep learning powered interactive segmentation with ImJoy.

In contrast to traditional deep learning model training, where all annotations are collected before training starts, interactive learning trains the model while new annotations are being added.

## Key feature

Because training and annotation are interleaved, users can steer the model by feeding in the most informative data (e.g. samples on which it currently performs worst).

## Installation

  1. Install Anaconda or Miniconda
  2. Install Git
     ```bash
     conda install -c anaconda git
     ```
  3. Create the interactive-ml environment and install the package
     ```bash
     conda create -n interactive-ml python=3.7.2 -y
     conda activate interactive-ml
     git clone https://github.com/imjoy-team/imjoy-interactive-segmentation.git
     cd imjoy-interactive-segmentation
     pip install -r requirements.txt
     python -m ipykernel install --user --name imjoy-interactive-ml --display-name "ImJoy Interactive ML"
     ```
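After installation, you can sanity-check that the key dependencies are importable before launching the notebook. This is a minimal sketch; the package names below are assumptions (the authoritative list is in requirements.txt):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of `names` that cannot be imported in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# NOTE: this package list is an assumption; see requirements.txt for the real one.
candidates = ["ipykernel", "shapely", "imjoy"]
print("missing:", missing_packages(candidates))
```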


On Windows, if you see `WindowsError: [Error 126]`, install the failing module separately inside the interactive-ml environment. For example:
```bash
conda install -c conda-forge shapely
```

## Usage

Start the Jupyter notebook server with ImJoy:

```bash
jupyter notebook
```

Importantly, when creating the notebook file, select the kernel named "ImJoy Interactive ML".

You can download our example dataset to get started:

```bash
# this will save the example dataset to `./data/hpa_dataset_v2`
python download_example_dataset.py
```
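If you re-run your setup cell, you may want to skip the download when the dataset folder is already in place. A minimal sketch (the check is generic; `./data/hpa_dataset_v2` is the path used by the download script above):

```python
from pathlib import Path

def needs_download(dataset_dir):
    """Return True if the dataset folder is absent or empty."""
    p = Path(dataset_dir)
    return not p.exists() or not any(p.iterdir())

if needs_download("./data/hpa_dataset_v2"):
    print("run: python download_example_dataset.py")
else:
    print("dataset already present, skipping download")
```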

Create a Jupyter notebook and run the following code in a cell:

```python
from imjoy_plugin import start_interactive_segmentation

model_config = dict(
    type="cellpose",
    model_dir="./data/hpa_dataset_v2/__models__",
    channels=[2, 3],
    style_on=0,
    default_diameter=100,
    use_gpu=True,
    pretrained_model=False,
    resume=True,
)

start_interactive_segmentation(
    model_config,
    "./data/hpa_dataset_v2",
    ["microtubules.png", "er.png", "nuclei.png"],
    mask_type="labels",
    object_name="cell",
    scale_factor=1.0,
)
```
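The `model_config` dict mixes the model choice (`type`), data channels, and training options. If you run several experiments, a small helper that overlays overrides on shared defaults keeps the variants consistent. The helper below is not part of the library, just an illustrative sketch:

```python
# Hypothetical helper, NOT part of imjoy_plugin: build model_config variants
# by overlaying per-experiment overrides on a shared set of defaults.
DEFAULTS = dict(
    type="cellpose",
    model_dir="./data/hpa_dataset_v2/__models__",
    channels=[2, 3],
    style_on=0,
    default_diameter=100,
    use_gpu=True,
    pretrained_model=False,
    resume=True,
)

def make_model_config(**overrides):
    """Return a copy of DEFAULTS with the given keys replaced."""
    config = dict(DEFAULTS)
    config.update(overrides)
    return config

# e.g. the same setup on a machine without a GPU:
cpu_config = make_model_config(use_gpu=False)
```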

We also provide a notebook, tutorial.ipynb, that illustrates the whole interactive training workflow.