Here are the available switches for Cellpose 3.0.7:

```text
usage: cellpose [-h] [--version] [--verbose] [--Zstack] [--use_gpu] [--gpu_device GPU_DEVICE] [--check_mkl]
[--dir DIR] [--image_path IMAGE_PATH] [--look_one_level_down] [--img_filter IMG_FILTER]
[--channel_axis CHANNEL_AXIS] [--z_axis Z_AXIS] [--chan CHAN] [--chan2 CHAN2] [--invert]
[--all_channels] [--pretrained_model PRETRAINED_MODEL] [--restore_type RESTORE_TYPE] [--chan2_restore]
[--add_model ADD_MODEL] [--no_resample] [--no_interp] [--no_norm] [--do_3D] [--diameter DIAMETER]
[--stitch_threshold STITCH_THRESHOLD] [--min_size MIN_SIZE] [--flow_threshold FLOW_THRESHOLD]
[--cellprob_threshold CELLPROB_THRESHOLD] [--niter NITER] [--anisotropy ANISOTROPY]
[--exclude_on_edges] [--augment] [--save_png] [--save_tif] [--no_npy] [--savedir SAVEDIR]
[--dir_above] [--in_folders] [--save_flows] [--save_outlines] [--save_rois] [--save_txt] [--save_mpl]
[--train] [--train_size] [--test_dir TEST_DIR] [--mask_filter MASK_FILTER] [--diam_mean DIAM_MEAN]
[--learning_rate LEARNING_RATE] [--weight_decay WEIGHT_DECAY] [--n_epochs N_EPOCHS]
[--batch_size BATCH_SIZE] [--min_train_masks MIN_TRAIN_MASKS] [--SGD SGD] [--save_every SAVE_EVERY]
[--model_name_out MODEL_NAME_OUT]
Cellpose Command Line Parameters
optional arguments:
-h, --help show this help message and exit
--version show cellpose version info
--verbose show information about running and settings and save to log
--Zstack run GUI in 3D mode
Hardware Arguments:
--use_gpu use gpu if torch with cuda installed
--gpu_device GPU_DEVICE
which gpu device to use, use an integer for torch, or mps for M1
--check_mkl check if mkl working
Input Image Arguments:
--dir DIR folder containing data to run or train on.
--image_path IMAGE_PATH
if given and --dir not given, run on single image instead of folder (cannot train with this
option)
--look_one_level_down
run processing on all subdirectories of current folder
--img_filter IMG_FILTER
end string for images to run on
--channel_axis CHANNEL_AXIS
axis of image which corresponds to image channels
--z_axis Z_AXIS axis of image which corresponds to Z dimension
--chan CHAN channel to segment; 0: GRAY, 1: RED, 2: GREEN, 3: BLUE. Default: 0
--chan2 CHAN2 nuclear channel (if cyto, optional); 0: NONE, 1: RED, 2: GREEN, 3: BLUE. Default: 0
--invert invert grayscale channel
--all_channels use all channels in image if using own model and images with special channels
Model Arguments:
--pretrained_model PRETRAINED_MODEL
model to use for running or starting training
--restore_type RESTORE_TYPE
model to use for image restoration
--chan2_restore use nuclei restore model for second channel
--add_model ADD_MODEL
model path to copy model to hidden .cellpose folder for using in GUI/CLI
Algorithm Arguments:
--no_resample disable dynamics on full image (makes algorithm faster for images with large diameters)
--no_interp do not interpolate when running dynamics (was default)
--no_norm do not normalize images (normalize=False)
--do_3D process images as 3D stacks of images (nplanes x nchan x Ly x Lx
--diameter DIAMETER cell diameter, if 0 will use the diameter of the training labels used in the model, or with
built-in model will estimate diameter for each image
--stitch_threshold STITCH_THRESHOLD
compute masks in 2D then stitch together masks with IoU>0.9 across planes
--min_size MIN_SIZE minimum number of pixels per mask, can turn off with -1
--flow_threshold FLOW_THRESHOLD
flow error threshold, 0 turns off this optional QC step. Default: 0.4
--cellprob_threshold CELLPROB_THRESHOLD
cellprob threshold, default is 0, decrease to find more and larger masks
--niter NITER niter, number of iterations for dynamics for mask creation, default of 0 means it is
proportional to diameter, set to a larger number like 2000 for very long ROIs
--anisotropy ANISOTROPY
anisotropy of volume in 3D
--exclude_on_edges discard masks which touch edges of image
--augment tiles image with overlapping tiles and flips overlapped regions to augment
Output Arguments:
--save_png save masks as png and outlines as text file for ImageJ
--save_tif save masks as tif and outlines as text file for ImageJ
--no_npy suppress saving of npy
--savedir SAVEDIR folder to which segmentation results will be saved (defaults to input image directory)
--dir_above save output folders adjacent to image folder instead of inside it (off by default)
--in_folders flag to save output in folders (off by default)
--save_flows whether or not to save RGB images of flows when masks are saved (disabled by default)
--save_outlines whether or not to save RGB outline images when masks are saved (disabled by default)
--save_rois whether or not to save ImageJ compatible ROI archive (disabled by default)
--save_txt flag to enable txt outlines for ImageJ (disabled by default)
--save_mpl save a figure of image/mask/flows using matplotlib (disabled by default). This is slow,
especially with large images.
Training Arguments:
--train train network using images in dir
--train_size train size network at end of training
--test_dir TEST_DIR folder containing test data (optional)
--mask_filter MASK_FILTER
end string for masks to run on. use '_seg.npy' for manual annotations from the GUI. Default:
_masks
--diam_mean DIAM_MEAN
mean diameter to resize cells to during training -- if starting from pretrained models it
cannot be changed from 30.0
--learning_rate LEARNING_RATE
learning rate. Default: 0.2
--weight_decay WEIGHT_DECAY
weight decay. Default: 1e-05
--n_epochs N_EPOCHS number of epochs. Default: 500
--batch_size BATCH_SIZE
batch size. Default: 8
--min_train_masks MIN_TRAIN_MASKS
minimum number of masks a training image must have to be used. Default: 5
--SGD SGD use SGD
--save_every SAVE_EVERY
number of epochs to skip between saves. Default: 100
--model_name_out MODEL_NAME_OUT
Name of model to save as, defaults to name describing model architecture. Model is saved in
the folder specified by --dir in models subfolder.
```
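For reference, a typical command line using these switches would look something like this (the paths and model name are just placeholders, not my actual setup):

```bash
cellpose --dir /path/to/images \
         --pretrained_model /path/to/custom_model \
         --chan 0 --chan2 0 --diameter 30 \
         --use_gpu --save_tif --verbose
```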
Firstly, my versioning information:
I am using the CUDA SDK v12.4 and torch 2.4.0.
I am using a conda environment with both CellProfiler and Cellpose running in it. I have installed the GUI version of Cellpose, and it runs well with CUDA/GPU processing enabled.
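For what it's worth, this is the quick sanity check I run inside that conda environment to confirm torch can actually see the GPU (nothing Cellpose-specific, just plain torch):

```python
# Minimal check that the CUDA build of torch in this environment works.
import torch

print(torch.__version__)          # expect 2.4.0
print(torch.version.cuda)         # CUDA version torch was built against
print(torch.cuda.is_available())  # True if torch can use the GPU
```

That all reports what I expect, which lines up with the GUI happily using the GPU.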
However, there are issues with the RunCellpose plugin in CellProfiler. runcellpose.py was pulled via git clone today, 8th August 2024.
Issues:

1. In the CellProfiler project pipeline, the RunCellpose modules are crossed out ❌ with a "Value not in decimal format" notification. With the workaround I have, that issue doesn't affect the processing.
2. The models from Cellpose 2.x have now mostly been removed, but they are still referenced by the current runcellpose.py.
3. `net_avg`, which is used by the plugin, is no longer used by Cellpose 3.0.7, so the plugin throws an error and exits. If I comment out the `net_avg` lines (536, 551, and 610), the plugin works, but with CPU only (see the next point, and the sketch below this list).
4. The Test GPU button does not work. The GPU detection fails: the reference to `core.use_gpu` on line 717 of the script fails because there is no reference to `core`.
5. When using the GPU anyway, the process fails even though there is a line that says: `****Worker 0: TORCH CUDA version installed and working. ****`
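To make point 3 concrete: rather than just commenting the lines out, the pattern I'm moving towards is to only pass `net_avg` when the installed Cellpose still accepts it. This is only a sketch of the idea, not the actual plugin code, and the paths are placeholders:

```python
# Sketch: drop net_avg on Cellpose 3.x, keep it for older versions that accept it.
import inspect
from cellpose import io, models

model = models.CellposeModel(gpu=False, pretrained_model="/path/to/custom_model")  # placeholder
image = io.imread("/path/to/image.tif")                                            # placeholder

eval_kwargs = dict(channels=[0, 0], diameter=30.0)
# net_avg existed in Cellpose 2.x but was removed in 3.x, so only forward it if supported.
if "net_avg" in inspect.signature(model.eval).parameters:
    eval_kwargs["net_avg"] = False

masks, flows, styles = model.eval(image, **eval_kwargs)
```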
So, in summary, there are a few major issues with runcellpose.py and Cellpose 3.0.7. My current edits to remove `net_avg` make it work, but only via CPU. I am using custom-trained models generated in Cellpose 3.0.7.
I hope these issues can be resolved; it would be good to get GPU processing working. I'll continue to look at the Python script to see what I can do.
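In case it helps whoever picks this up, the direction I'm exploring for the GPU detection is roughly the following. It assumes the only problem around line 717 is that `core` is never imported, which may well not be the whole story:

```python
# Sketch of a GPU check with CPU fallback, assuming cellpose.core just needs importing.
from cellpose import core, models

use_gpu = core.use_gpu()  # Cellpose's own check for a working torch GPU
print("GPU available to Cellpose:", use_gpu)

model = models.CellposeModel(gpu=use_gpu,
                             pretrained_model="/path/to/custom_model")  # placeholder
```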