HenriquesLab / ZeroCostDL4Mic

ZeroCostDL4Mic: A Google Colab based no-cost toolbox to explore Deep-Learning in Microscopy
MIT License

Error in 1.3. Load key dependencies StarDist_2D #249

Open · BelES123 opened 1 year ago

BelES123 commented 1 year ago

Hello, I ran into the following problem trying to load the key dependencies for StarDist_2D (1.3. Load key dependencies). Installation of StarDist and dependencies in step 1.1 goes fine.

```
2.11.0 Tensorflow enabled.

Libraries installed
Notebook version: 1.18
Latest notebook version: 1.18
This notebook is up-to-date.
```

```
ValueError                                Traceback (most recent call last)
    472 # Build requirements file for local run
    473 after = [str(m) for m in sys.modules]
--> 474 build_requirements_file(before, after)

3 frames
/usr/local/lib/python3.9/dist-packages/pandas/io/parsers/readers.py in _refine_defaults_read(dialect, delimiter, delim_whitespace, engine, sep, error_bad_lines, warn_bad_lines, on_bad_lines, names, prefix, defaults)
   1534
   1535     if delimiter == "\n":
-> 1536         raise ValueError(
   1537             r"Specified \n as separator or delimiter. This forces the python engine "
   1538             "which does not accept a line terminator. Hence it is not allowed to use "

ValueError: Specified \n as separator or delimiter. This forces the python engine which does not accept a line terminator. Hence it is not allowed to use the line terminator as separator.
```
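For context: recent pandas versions reject `\n` as a `read_csv` separator outright, and the traceback shows the notebook's `build_requirements_file` helper hitting exactly that check. A minimal reproduction of the pandas behaviour (the file name here is just a placeholder):

```python
import pandas as pd

# Give read_csv an existing file so only the separator check can fail.
with open('modules.txt', 'w') as f:
    f.write('numpy\npandas\n')

try:
    pd.read_csv('modules.txt', sep='\n', header=None)
except ValueError as e:
    print(e)  # "Specified \n as separator or delimiter. ..."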

clairepiperr commented 1 year ago

Hi,

I am having the same error; 1.3. Load key dependencies crashes.

```
2.11.0 Tensorflow enabled.
site.config.json: 5.71kiB [00:00, 2.58MiB/s]
_resolve_source.py (471): Download (5705) does not have expected size (2006).
collection.json: 189kiB [00:00, 24.5MiB/s]
_resolve_source.py (471): Download (189473) does not have expected size (24320).

Libraries installed
Notebook version: 1.18
Latest notebook version: 1.18
This notebook is up-to-date.
```

```
ParserError                               Traceback (most recent call last)
    471 # Build requirements file for local run
    472 after = [str(m) for m in sys.modules]
--> 473 build_requirements_file(before, after)
```

Let me know!

[screenshot: stardist error]

guijacquemet commented 1 year ago

Hi, I posted a quick fix. Let me know if it works on your side!

Cheers

Guillaume
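For anyone stuck on an older copy of the notebook, a workaround in the same spirit is to build the requirements list without going through pandas at all. This is purely a sketch (the actual committed fix may differ):

```python
import sys

def build_requirements_file(before, after):
    """Sketch: write newly imported top-level package names, one per line,
    using plain file I/O instead of pandas (avoids the sep='\\n' ValueError)."""
    new_modules = sorted({m.split('.')[0] for m in set(after) - set(before)})
    with open('requirements.txt', 'w') as f:
        f.write('\n'.join(new_modules) + '\n')

# Usage mirrors the notebook cell:
before = [str(m) for m in sys.modules]
# ... install cells run here ...
after = [str(m) for m in sys.modules]
build_requirements_file(before, after)
```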

clairepiperr commented 1 year ago

Hi,

Yes, that seems to be working now.

Thanks :)

Claire

clairepiperr commented 1 year ago

I managed to get to 3.3 fine this time, but now I'm having an error using pretrained weights. It seems the download path isn't working, even though it was earlier when I had the other error.

[screenshot of the error]

Any advice?

guijacquemet commented 1 year ago

I cannot reproduce the error on my side. What settings are you using?

clairepiperr commented 1 year ago

I get the same error regardless of which model I use.

[screenshot of the error]

guijacquemet commented 1 year ago

My best guess is that you need to restart your Google Colab session. It looks like a connection issue.
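If restarting doesn't help, one way to test the download in isolation is StarDist's own `from_pretrained`, which fetches and caches the published models. This is a quick sanity check, not the notebook's exact code path:

```python
from stardist.models import StarDist2D

# Lists the registered pretrained models, then downloads one of them.
StarDist2D.from_pretrained()
model = StarDist2D.from_pretrained('2D_versatile_fluo')
print(model)
```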

BelES123 commented 1 year ago

> Hi, I posted a quick fix. Let me know if it works on your side!
>
> Cheers
>
> Guillaume

Thank you very much, Guillaume, it works now!

I have another question regarding StarDist training.

In step 4.1. Prepare the training data and model for training, I get the following warning:

```
WARNING: median object size larger than field of view of the neural network.
Config2D(n_dim=2, axes='YXC', n_channel_in=1, n_channel_out=33,
         train_checkpoint='weights_best.h5', train_checkpoint_last='weights_last.h5',
         train_checkpoint_epoch='weights_now.h5', n_rays=32, grid=(2, 2),
         backbone='unet', n_classes=None, unet_n_depth=3, unet_kernel_size=(3, 3),
         unet_n_filter_base=32, unet_n_conv_per_depth=2, unet_pool=(2, 2),
         unet_activation='relu', unet_last_activation='relu', unet_batch_norm=False,
         unet_dropout=0.0, unet_prefix='', net_conv_after_unet=128,
         net_input_shape=(None, None, 1), net_mask_shape=(None, None, 1),
         train_shape_completion=False, train_completion_crop=32,
         train_patch_size=(1024, 1024), train_background_reg=0.0001,
         train_foreground_only=0.9, train_sample_cache=True, train_dist_loss='mae',
         train_loss_weights=(1, 0.2), train_class_weights=(1, 1), train_epochs=400,
         train_steps_per_epoch=100, train_learning_rate=0.0003, train_batch_size=2,
         train_n_val_patches=None, train_tensorboard=True,
         train_reduce_lr={'factor': 0.5, 'patience': 40, 'min_delta': 0}, usegpu=False)
Number of steps: 7
```

Also, StarDist doesn't perform well after training. I believe this is because I am trying to recognize degradation spots that are much smaller than nuclei. Is there a way to input the median object size into the model?
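For reference, the median object size that the warning compares against the network's field of view can be measured directly from the label images with StarDist's `calculate_extents` helper. A minimal sketch, assuming `mask_paths` lists your mask files:

```python
import numpy as np
from tifffile import imread
from stardist import calculate_extents

mask_paths = ['mask_01.tif', 'mask_02.tif']  # placeholder paths to your label images
masks = [imread(p) for p in mask_paths]

# Median extent of the labelled objects along (y, x), in pixels.
extents = calculate_extents(masks, np.median)
print('median object extent (y, x):', extents)
```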

guijacquemet commented 1 year ago

Hi,

Not directly. In the advanced training parameters, you could try playing with `grid_parameter`: increase this number if the cells/nuclei are very large, or decrease it if they are very small. Default value: 2.

Could you post an example image?
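For what it's worth, the notebook's `grid_parameter` corresponds to the `grid` entry in the `Config2D` dump above (grid=(2, 2) by default), so for very small objects a finer grid is the direction to try. A sketch of the underlying StarDist configuration (parameter values are illustrative):

```python
from stardist.models import Config2D

# grid=(1, 1) keeps full spatial resolution in the object-distance
# predictions, which helps when objects are only a few pixels across.
conf = Config2D(n_rays=32, grid=(1, 1), n_channel_in=1)
print(conf)
```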

BelES123 commented 1 year ago

Hi,

> Not directly. In the advanced training parameters, you could try playing with `grid_parameter`: increase this number if the cells/nuclei are very large, or decrease it if they are very small. Default value: 2.
>
> Could you post an example image?

Hello Guillaume, here is an original image. For training StarDist, I invert it (second image), and the third image is my mask.

[attachment: Image_original]

[attachment: Image (inverted)]

[attachment: Mask]
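On the inversion step mentioned above: a minimal sketch of inverting an unsigned-integer image before training (filenames are placeholders):

```python
import numpy as np
from tifffile import imread, imwrite

# Invert an unsigned-integer image (e.g. uint8/uint16) so that dark
# spots become bright, matching the inversion step described above.
img = imread('Image_original.tif')
inverted = np.iinfo(img.dtype).max - img
imwrite('Image_inverted.tif', inverted)
```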