kevinjohncutler / omnipose

Omnipose: a high-precision solution for morphology-independent cell segmentation
https://omnipose.readthedocs.io

Issues using pretrained models. #41

Closed: georgeoshardo closed this issue 1 year ago

georgeoshardo commented 1 year ago

Hello @kevinjohncutler

I'm having trouble retraining pretrained models. Is this currently supported in Omnipose? I could not get any pretrained model to work, including models trained on the same dataset (my immediate goal is to resume training, but eventually I would like to start from other pretrained models and fine-tune them).

I am getting the following error:

/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/cellpose/core.py:999: RuntimeWarning: divide by zero encountered in divide
  rsc = diam_train[inds] / self.diam_mean if rescale else np.ones(len(inds), np.float32)
Traceback (most recent call last):
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/omnipose/__main__.py", line 3, in <module>
    main(omni_CLI=True)
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/cellpose/__main__.py", line 476, in main
    cpmodel_path = model.train(images, labels, train_files=image_names,
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/cellpose/models.py", line 1045, in train
    model_path = self._train_net(train_data, train_labels, 
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/cellpose/core.py", line 1001, in _train_net
    imgi, lbl, scale = transforms.random_rotate_and_resize(
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/cellpose/transforms.py", line 839, in random_rotate_and_resize
    return omnipose.core.random_rotate_and_resize(X, Y=Y, scale_range=scale_range, gamma_range=gamma_range,
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/omnipose/core.py", line 1387, in random_rotate_and_resize
    imgi[n], lbl[n], scale[n] = random_crop_warp(img, y, nt, tyx, nchan, scale[n], 
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/omnipose/core.py", line 1491, in random_crop_warp
    offset = c_in - np.dot(np.linalg.inv(M), c_out)
  File "<__array_function__ internals>", line 180, in inv
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/numpy/linalg/linalg.py", line 552, in inv
    ainv = _umath_linalg.inv(a, signature=signature, extobj=extobj)
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/numpy/linalg/linalg.py", line 89, in _raise_linalgerror_singular
    raise LinAlgError("Singular matrix")
numpy.linalg.LinAlgError: Singular matrix

Steps to reproduce:

Create a new test environment and install the PyPI release of Omnipose. I start from a blank environment and install PyTorch this way because I'm using CUDA 11.7:

conda create -n omnipose_test python=3.10 && conda activate omnipose_test && pip install omnipose torch torchvision torchaudio

Train a fresh model on some training data for an epoch or so, just to generate a model file. This works fine:

$ python -m omnipose --train --use_gpu --dir /home/georgeos/SSD/omnipose_SyMBac_TD/  --mask_filter "_masks" --n_epochs 4000 --pretrained_model None --save_every 100 --save_each --learning_rate 0.1 --diameter 0 --batch_size 16

!NEW LOGGING SETUP! To see cellpose progress, set --verbose
No --verbose => no progress or info printed
2023-03-07 07:47:56,403 [INFO] ** TORCH GPU version installed and working. **
2023-03-07 07:47:56,403 [INFO] >>>> using GPU
Omnipose enabled. See Omnipose repo for licencing details.
2023-03-07 07:47:56,403 [INFO] Training omni model. Setting nclasses=4, RAdam=True
2023-03-07 07:47:57,729 [INFO] not all flows are present, will run flow generation for all images
2023-03-07 07:47:58,387 [INFO] training from scratch
2023-03-07 07:47:58,388 [INFO] median diameter set to 0 => no rescaling during training
2023-03-07 07:48:10,668 [INFO] No precomuting flows with Omnipose. Computed during training.
2023-03-07 07:48:11,336 [INFO] >>> Using RAdam optimizer
2023-03-07 07:48:11,336 [INFO] >>>> training network with 2 channel input <<<<
2023-03-07 07:48:11,336 [INFO] >>>> LR: 0.10000, batch_size: 16, weight_decay: 0.00001
2023-03-07 07:48:11,336 [INFO] >>>> ntrain = 200
2023-03-07 07:48:11,336 [INFO] >>>> nimg_per_epoch = 200
2023-03-07 07:48:25,789 [INFO] Epoch 0, Time 14.5s, Loss 15.3112, LR 0.1000
2023-03-07 07:48:37,545 [INFO] saving network parameters to /home/georgeos/SSD/omnipose_SyMBac_TD/models/cellpose_residual_on_style_on_concatenation_off_omni_nclasses_4_omnipose_SyMBac_TD_2023_03_07_07_48_11.332821_epoch_1

Stop training after a model has been saved, then attempt to train again with that model as the pretrained model. This is where the error appears:

$ python -m omnipose --train --use_gpu --dir /home/georgeos/SSD/omnipose_SyMBac_TD/  --mask_filter "_masks" --n_epochs 4000 --pretrained_model /home/georgeos/SSD/omnipose_SyMBac_TD/models/cellpose_residual_on_style_on_concatenation_off_omni_nclasses_4_omnipose_SyMBac_TD_2023_03_07_07_48_11.332821_epoch_1 --save_every 100 --save_each --learning_rate 0.1 --diameter 0 --batch_size 16

!NEW LOGGING SETUP! To see cellpose progress, set --verbose
No --verbose => no progress or info printed
2023-03-07 07:50:01,667 [INFO] ** TORCH GPU version installed and working. **
2023-03-07 07:50:01,667 [INFO] >>>> using GPU
Omnipose enabled. See Omnipose repo for licencing details.
2023-03-07 07:50:01,667 [INFO] Training omni model. Setting nclasses=4, RAdam=True
2023-03-07 07:50:03,009 [INFO] not all flows are present, will run flow generation for all images
2023-03-07 07:50:03,675 [INFO] pretrained model /home/georgeos/SSD/omnipose_SyMBac_TD/models/cellpose_residual_on_style_on_concatenation_off_omni_nclasses_4_omnipose_SyMBac_TD_2023_03_07_07_48_11.332821_epoch_1 is being used
2023-03-07 07:50:03,675 [INFO] during training rescaling images to fixed diameter of 0.0 pixels
2023-03-07 07:50:03,764 [INFO] Training with rescale = 1.00
2023-03-07 07:50:15,611 [INFO] No precomuting flows with Omnipose. Computed during training.
2023-03-07 07:50:16,309 [INFO] >>> Using RAdam optimizer
2023-03-07 07:50:19,696 [INFO] >>>> median diameter set to = 0
2023-03-07 07:50:19,696 [INFO] >>>> training network with 2 channel input <<<<
2023-03-07 07:50:19,696 [INFO] >>>> LR: 0.10000, batch_size: 16, weight_decay: 0.00001
2023-03-07 07:50:19,696 [INFO] >>>> ntrain = 200
2023-03-07 07:50:19,696 [INFO] >>>> nimg_per_epoch = 200
/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/cellpose/core.py:999: RuntimeWarning: divide by zero encountered in divide
  rsc = diam_train[inds] / self.diam_mean if rescale else np.ones(len(inds), np.float32)
Traceback (most recent call last):
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/omnipose/__main__.py", line 3, in <module>
    main(omni_CLI=True)
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/cellpose/__main__.py", line 476, in main
    cpmodel_path = model.train(images, labels, train_files=image_names,
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/cellpose/models.py", line 1045, in train
    model_path = self._train_net(train_data, train_labels, 
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/cellpose/core.py", line 1001, in _train_net
    imgi, lbl, scale = transforms.random_rotate_and_resize(
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/cellpose/transforms.py", line 839, in random_rotate_and_resize
    return omnipose.core.random_rotate_and_resize(X, Y=Y, scale_range=scale_range, gamma_range=gamma_range,
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/omnipose/core.py", line 1387, in random_rotate_and_resize
    imgi[n], lbl[n], scale[n] = random_crop_warp(img, y, nt, tyx, nchan, scale[n], 
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/omnipose/core.py", line 1491, in random_crop_warp
    offset = c_in - np.dot(np.linalg.inv(M), c_out)
  File "<__array_function__ internals>", line 180, in inv
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/numpy/linalg/linalg.py", line 552, in inv
    ainv = _umath_linalg.inv(a, signature=signature, extobj=extobj)
  File "/home/georgeos/miniconda3/envs/omnipose_test/lib/python3.10/site-packages/numpy/linalg/linalg.py", line 89, in _raise_linalgerror_singular
    raise LinAlgError("Singular matrix")
numpy.linalg.LinAlgError: Singular matrix

Is this a problem with Omnipose? I get this error regardless of which pretrained Omnipose model I use.

Thanks in advance!

PS: here is a link to the training data I am using: https://www.dropbox.com/s/s6pvb7ymkjxkf07/omnipose_SyMBac_TD.zip?dl=0

kevinjohncutler commented 1 year ago

@georgeoshardo Sorry for the delay, and for the bug. This is one I've known about for a while but never resolved, since I always got better results training from scratch (plus final models are easier to reproduce that way). I think it has something to do with how the model gets initialized from the pretrained-model string: in particular, the diameter/rescaling parameters (also something I seldom use) don't get set correctly and cause the transformations to go awry. I'll find some time soon to fix this.
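For concreteness, here is a minimal sketch (plain NumPy, not the actual Omnipose code, with illustrative values) of how a median diameter of 0 could propagate from the divide-by-zero warning to the singular-matrix error in the traceback above:

# Minimal sketch, not Omnipose code: a guess at the mechanism behind the error.
import numpy as np

diam_train = np.array([30.0], dtype=np.float32)  # per-image diameter estimates (illustrative)
diam_mean = 0.0                                  # what --diameter 0 corresponds to

rsc = diam_train / diam_mean   # -> [inf], the "divide by zero" RuntimeWarning
scale = 1.0 / rsc[0]           # -> 0.0, so the crop/warp gets scaled by zero

# an affine matrix whose linear part is scaled to zero has no inverse
M = np.array([[scale, 0.0, 10.0],
              [0.0, scale, 10.0],
              [0.0, 0.0,   1.0]])
np.linalg.inv(M)               # raises numpy.linalg.LinAlgError: Singular matrix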

kevinjohncutler commented 1 year ago

@georgeoshardo Try out the latest commit (the real change is in the cellpose_omni repo, but pull the changes from both repos). It turned out to be just one line that forced all pretrained runs to use rescaling (all pretrained Cellpose models like cyto2 use rescaling), so I got rid of that. If people need rescaling for their models, they just need to specify it in the retraining round as well.
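For anyone who does want rescaling with a pretrained model after this change, a sketch of what the retraining command could look like (paths are placeholders, the diameter value is only illustrative, and my understanding is that a nonzero --diameter is what enables rescaling during training):

$ python -m omnipose --train --use_gpu --dir /path/to/training_data --mask_filter "_masks" --pretrained_model /path/to/saved_model --diameter 30 --n_epochs 4000 --save_every 100 --learning_rate 0.1 --batch_size 16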

georgeoshardo commented 1 year ago

Thanks @kevinjohncutler! Things are working flawlessly now, and I can even train from models such as bact_phase_omni.