kevinjohncutler / omnipose

Omnipose: a high-precision solution for morphology-independent cell segmentation
https://omnipose.readthedocs.io

2D training using an omnipose pretrained model #40

Closed by nitsbar 1 year ago

nitsbar commented 1 year ago

Hey @kevinjohncutler I want to continue training from the Omnipose pretrained models `bact_phase_omni` and `bact_fluor_omni` on my own data. I tried passing their path in the `pretrained_model` parameter:

```python
model_path = r'C:\Users\nitsa\.cellpose\models\bact_phase_cptorch_0'
model = CellposeModel(
    gpu=False, pretrained_model=model_path,
    omni=True, nclasses=4, nchan=1, diam_mean=0
)
```

I get the following error:

```
failed to load model
Error(s) in loading state_dict for CPnet:
	size mismatch for downsample.down.res_down_0.conv.conv_0.0.weight: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([1]).
	size mismatch for downsample.down.res_down_0.conv.conv_0.0.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([1]).
	size mismatch for downsample.down.res_down_0.conv.conv_0.0.running_mean: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([1]).
	size mismatch for downsample.down.res_down_0.conv.conv_0.0.running_var: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([1]).
	size mismatch for downsample.down.res_down_0.conv.conv_0.2.weight: copying a param with shape torch.Size([32, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 1, 3, 3]).
	size mismatch for downsample.down.res_down_0.proj.0.weight: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([1]).
	size mismatch for downsample.down.res_down_0.proj.0.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([1]).
	size mismatch for downsample.down.res_down_0.proj.0.running_mean: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([1]).
	size mismatch for downsample.down.res_down_0.proj.0.running_var: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([1]).
	size mismatch for downsample.down.res_down_0.proj.1.weight: copying a param with shape torch.Size([32, 2, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 1, 1, 1]).
	size mismatch for output.2.weight: copying a param with shape torch.Size([3, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([4, 32, 1, 1]).
	size mismatch for output.2.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([4]).
```

Is there a different way to train these models? Thank you so much for your help!

kevinjohncutler commented 1 year ago

@nitsbar Sorry for the delay. Issue #41 also pertains to pretrained models, and there is a more important bug there that needs to be fixed first. In your case, I think you just did not pass all the required parameters: you need to match the core parameters of the original pretrained model, especially `--omni --nclasses 4`. I'll follow up soon once that other bug is fixed.
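For anyone else hitting this traceback: the mismatched shapes encode what the checkpoint expects, so you can read the parameters to match directly off the error. The first conv weight in the checkpoint is `(32, 2, 3, 3)`, suggesting 2 input channels (`nchan=2`), and `output.2.weight` is `(3, 32, 1, 1)`, suggesting 3 output maps (consistent with a Cellpose-style `nclasses=3` checkpoint rather than an Omnipose 4-class one). A minimal sketch of that reasoning — the helper function is hypothetical (my own, not part of the cellpose/omnipose API), and the parameter names are taken from the traceback above:

```python
def infer_checkpoint_params(shapes):
    """Infer input-channel and output-class counts from checkpoint tensor shapes.

    `shapes` maps parameter names to shape tuples, e.g. built from a loaded
    state dict via {k: tuple(v.shape) for k, v in torch.load(path).items()}.
    """
    # First conv layer weight is (out_channels, in_channels, kH, kW),
    # so its second dimension is the number of input image channels.
    nchan = shapes["downsample.down.res_down_0.conv.conv_0.2.weight"][1]
    # Final 1x1 conv weight is (n_output_maps, features, 1, 1); the first
    # dimension is the number of network outputs (flow/prob/etc. maps).
    nclasses = shapes["output.2.weight"][0]
    return nchan, nclasses

# Shapes reported in the traceback above:
checkpoint_shapes = {
    "downsample.down.res_down_0.conv.conv_0.2.weight": (32, 2, 3, 3),
    "output.2.weight": (3, 32, 1, 1),
}
nchan, nclasses = infer_checkpoint_params(checkpoint_shapes)
print(nchan, nclasses)  # → 2 3
```

Under these assumptions, instantiating the model with channel and class counts that match the checkpoint (or pointing `pretrained_model` at the intended `_omni` model file instead of the `cptorch` one) should let the state dict load.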

kevinjohncutler commented 1 year ago

@nitsbar #41 is closed, I think that and my above parameter recommendations should solve your issue. Please reopen this if you have any further questions!