SBU-BMI / wsinfer

🔥 🚀 Blazingly fast pipeline for patch-based classification in whole slide images
https://wsinfer.readthedocs.io
Apache License 2.0

wsinfer run --help needs updating #159

Closed: ebremer closed this issue 1 year ago

ebremer commented 1 year ago

see"To list all available models and weights, use wsinfer ls."

wsinfer run --help
Usage: wsinfer run [OPTIONS]

  Run model inference on a directory of whole slide images.

  This command will create a tissue mask of each WSI. Then patch coordinates
  will be computed. The chosen model will be applied to each patch, and the
  results will be saved to a CSV in `RESULTS_DIR/model-output`.

  Example:

  CUDA_VISIBLE_DEVICES=0 wsinfer run --wsi-dir slides/ --results-dir results
  --model breast-tumor-resnet34.tcga-brca --batch-size 32 --num-workers 4

  To list all available models and weights, use `wsinfer ls`.

Options:
  -i, --wsi-dir DIRECTORY         Directory containing whole slide images.
                                  This directory can *only* contain whole
                                  slide images.  [required]
  -o, --results-dir DIRECTORY     Directory to store results. If directory
                                  exists, will skip whole slides for which
                                  outputs exist.  [required]
  -m, --model [breast-tumor-inception_v4.tcga-brca|breast-tumor-resnet34.tcga-brca|breast-tumor-vgg16mod.tcga-brca|colorectal-tiatoolbox-resnet50.kather100k|lung-tumor-resnet34.tcga-luad|lymphnodes-tiatoolbox-resnet50.patchcamelyon|pancancer-lymphocytes-inceptionv4.tcga|pancreas-tumor-preactresnet34.tcga-paad|prostate-tumor-resnet34.tcga-prad]
                                  Name of the model to use from WSInfer Model
                                  Zoo. Mutually exclusive with --config.
  -c, --config FILE               Path to configuration for the trained model.
                                  Use this option if the model weights are not
                                  registered in wsinfer. Mutually exclusive
                                   with --model.
  -p, --model-path FILE           Path to the pretrained model. Use only when
                                  --config is passed. Mutually exclusive with
                                  --model.
  -b, --batch-size INTEGER RANGE  Batch size during model inference. If using
                                  multiple GPUs, increase the batch size.
                                  [default: 32; x>=1]
  -n, --num-workers INTEGER RANGE
                                  Number of workers to use for data loading
                                  during model inference (n=0 for single
                                  thread). Set this to the number of cores on
                                  your machine or lower.  [default: 8; x>=0]
  --speedup / --no-speedup        JIT-compile the model and apply inference
                                  optimizations. This imposes a startup cost
                                  but may improve performance overall.
                                  [default: no-speedup]
  --help                          Show this message and exit.
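
For reference, the two mutually exclusive ways of selecting a model described in the options above, as a sketch using only the flags documented in this help text (slides/, results/, config.json, and weights.pt are hypothetical paths):

```
# Mode 1: a model registered in the WSInfer Model Zoo (via --model).
wsinfer run --wsi-dir slides/ --results-dir results/ \
    --model breast-tumor-resnet34.tcga-brca

# Mode 2: a local, unregistered model (via --config and --model-path).
wsinfer run --wsi-dir slides/ --results-dir results/ \
    --config config.json --model-path weights.pt
```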