HenriquesLab / ZeroCostDL4Mic

ZeroCostDL4Mic: A Google Colab based no-cost toolbox to explore Deep-Learning in Microscopy
MIT License
553 stars 129 forks

CARE (3D) Google Colab for resolution improvement #304

Closed AlejandraRM67 closed 7 months ago

AlejandraRM67 commented 9 months ago

BUGS

1. When installing CARE and its dependencies in Google Colab, I get the following error, but it does not seem to affect any subsequent step. Should I ignore it?

Install CARE and dependencies:

Preparing metadata (setup.py) ... done
error: subprocess-exited-with-error

× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Building wheel for h5py (setup.py) ... error
ERROR: Failed building wheel for h5py
ERROR: Could not build wheels for h5py, which is required to install pyproject.toml-based projects

Preparing metadata (setup.py) ... done
Building wheel for wget (setup.py) ... done

2. Also, I am trying to train a model to increase the resolution of images, using as source and target the same image at different resolutions: source: 204x204 pixels and a smaller spacing (.tif with z=54); target: 2048x2048 pixels and a higher spacing (.tif with z=54). When I run the "Create the model and dataset objects" step, I get the error below (for this example I load just one pair of images). It seems that I cannot use this tool (the CARE (3D) Google Colab) to increase the resolution of images, since I cannot train a model where the source and target images have different resolutions. Is this correct? Or should I instead train a model with images of equal dimensions but different quality (e.g. a noisy source image) and then, when predicting, feed in a lower-resolution image (e.g. 204x204 pixels) to get a higher-resolution one (e.g. 2048x2048 pixels)?

1 raw images x 1 transformations = 1 images
1 images x 200 patches per image = 200 patches in total

Input data: /content: target='/content/gdrive/MyDrive/Colab Notebooks/data/input/train_up', sources=['/content/gdrive/MyDrive/Colab Notebooks/data/input/train_down'], axes='ZYX', pattern='.tif'

Transformations: 1 x Identity

Patch size: 8 x 80 x 80

0%| | 0/1 [00:00<?, ?it/s]

ValueError                                Traceback (most recent call last)
&lt;ipython-input&gt; in &lt;cell line: 24&gt;()
     22     pattern='.tif'
     23 )
---> 24 X, Y, XY_axes = create_patches (
     25     raw_data   = raw_data,
     26     patch_size = (patch_height,patch_size,patch_size),

1 frames
/usr/local/lib/python3.10/dist-packages/csbdeep/data/generate.py in create_patches(raw_data, patch_size, n_patches_per_image, patch_axes, save_file, transforms, patch_filter, normalization, shuffle, verbose)
    339     # len(axes) >= x.ndim or _raise(ValueError())
    340     axes == axes_check_and_normalize(_axes) or _raise(ValueError('not all images have the same axes.'))
--> 341     x.shape == y.shape or _raise(ValueError())
    342     mask is None or mask.shape == x.shape or _raise(ValueError())
    343     (channel is None or (isinstance(channel,int) and 0<=channel<x.ndim)) or _raise(ValueError())

/usr/local/lib/python3.10/dist-packages/csbdeep/utils/utils.py in _raise(e)
     89 def _raise(e):
     90     if isinstance(e, BaseException):
---> 91         raise e
     92     else:
     93         raise ValueError(e)

ValueError:
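For context, the line that raises here is a plain shape-equality guard inside csbdeep's create_patches: the source and target stacks share the same Z depth but differ in XY, so the check fails (csbdeep raises a bare ValueError, hence the empty message above). A minimal sketch of that guard, with the shapes from this report and an explanatory message added for clarity:

```python
# Simplified sketch of the shape check in csbdeep/data/generate.py.
def _raise(e):
    raise e

x_shape = (54, 204, 204)    # low-resolution source stack (Z, Y, X)
y_shape = (54, 2048, 2048)  # high-resolution target stack (Z, Y, X)

try:
    # csbdeep uses ValueError() with no message; one is added here for clarity.
    x_shape == y_shape or _raise(ValueError("source and target shapes differ"))
except ValueError as err:
    print("ValueError:", err)
```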


esgomezm commented 7 months ago

Dear @AlejandraRM67

Sorry for the late answer. To the first question: yes, you can ignore it. Regarding your second question: yes, the source and target images need to have the same dimensions in pixels. An easy way around this is to upsample the low-resolution (source) image, for example. You can do it easily in Fiji with Image > Scale, choosing with or without interpolation. Please let us know whether this works; if it doesn't, reopen the issue with some snapshots of the errors and the images. Thank you!

Esti
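The Fiji Image > Scale step can also be sketched in Python with scipy.ndimage.zoom, upsampling only the XY axes so the source stack matches the target's pixel dimensions before training. This is a sketch on a small synthetic stack; with real data you would load the .tif (e.g. with tifffile) and use the target's actual XY size:

```python
import numpy as np
from scipy.ndimage import zoom

# Synthetic stand-in for the low-resolution source stack, axes (Z, Y, X).
source = np.random.rand(8, 24, 24).astype(np.float32)
target_xy = 48  # XY size of the high-resolution target stack

# Zoom only in Y and X; keep Z unchanged. order=1 is linear interpolation
# (order=0 would be nearest-neighbour, i.e. "without interpolation" in Fiji).
factors = (1, target_xy / source.shape[1], target_xy / source.shape[2])
upsampled = zoom(source, factors, order=1)

print(upsampled.shape)  # (8, 48, 48): Y and X now match the target
```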