BiaPyX / BiaPy

Open source Python library for building bioimage analysis pipelines
https://BiaPyX.github.io
MIT License

TTA fails on semantic segmentation workflow #30

Closed: iarganda closed this 10 months ago

iarganda commented 10 months ago

I tested the 2D semantic segmentation workflow with the Lucchi dataset in its Colab notebook, and it fails during testing when test-time augmentation (TTA) is enabled.

This is the output I get:

[11:12:43.377433] ############################
[11:12:43.377476] #  PREPARE TEST GENERATOR  #
[11:12:43.377489] ############################
[11:12:43.383550] Loading checkpoint from file /content/output/my_2d_semantic_segmentation/checkpoints/my_2d_semantic_segmentation_1-checkpoint-best.pth
[11:12:43.462119] Model weights loaded!
[11:12:43.463007] ###############
[11:12:43.463039] #  INFERENCE  #
[11:12:43.463050] ###############
[11:12:43.463070] Making predictions on test data . . .
  0% 0/165 [00:00<?, ?it/s]
  0% 0/1 [00:00<?, ?it/s]
[11:12:43.468880] Processing image(s): ['testing-0001.tif']
[11:12:43.469306] ### OV-CROP ###
[11:12:43.469339] Cropping (1, 768, 1024, 1) images into (256, 256, 1) with overlapping. . .
[11:12:43.469353] Minimum overlap selected: (0, 0)
[11:12:43.469372] Padding: (32, 32)
[11:12:43.470362] Real overlapping (%): 0.13020833333333334
[11:12:43.470399] Real overlapping (pixels): 25.0
[11:12:43.470413] 6 patches per (x,y) axis
[11:12:43.474004] **** New data shape is: (24, 256, 256, 1)
[11:12:43.474055] ### END OV-CROP ###

 25% 1/4 [00:00<00:00,  3.18it/s]
100% 24/24 [00:01<00:00, 19.26it/s]
[11:12:45.268981] ### MERGE-OV-CROP ###
[11:12:45.269026] Merging (24, 256, 256, 1) images into (1, 768, 1024, 1) with overlapping . . .
[11:12:45.269058] Minimum overlap selected: (0, 0)
[11:12:45.269085] Padding: (32, 32)
[11:12:45.269741] Real overlapping (%): (0.13020833333333334, 0.0)
[11:12:45.269765] Real overlapping (pixels): (25.0, 0.0)
[11:12:45.269779] (6, 4) patches per (x,y) axis
[11:12:45.280197] **** New data shape is: (1, 768, 1024, 1)
[11:12:45.280244] ### END MERGE-OV-CROP ###
[11:12:45.280723] Saving (1, 1, 768, 1024, 1) data as .tif in folder: /content/output/my_2d_semantic_segmentation/results/my_2d_semantic_segmentation_1/per_image

  0% 0/1 [00:00<?, ?it/s]
[11:12:45.297158] Saving (1, 768, 1024, 1) data as .tif in folder: /content/output/my_2d_semantic_segmentation/results/my_2d_semantic_segmentation_1/per_image_binarized
  0% 0/1 [00:00<?, ?it/s]
  0% 0/165 [00:02<?, ?it/s]
Traceback (most recent call last):
  File "/content/BiaPy/main.py", line 140, in <module>
    workflow.test()
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/BiaPy/engine/base_workflow.py", line 645, in test
    self.process_sample(self.test_filenames[(i*l_X)+j:(i*l_X)+j+1], norm=(X_norm, Y_norm))
  File "/content/BiaPy/engine/base_workflow.py", line 1123, in process_sample
    pred = ensemble8_2d_predictions(
  File "/content/BiaPy/data/post_processing/post_processing.py", line 558, in ensemble8_2d_predictions
    r_aux = pred_func(total_img[i*batch_size_value:top])
  File "/content/BiaPy/engine/base_workflow.py", line 1128, in <lambda>
    to_numpy_format(self.apply_model_activations(img_batch_subdiv), self.axis_order_back)
  File "/content/BiaPy/engine/base_workflow.py", line 555, in apply_model_activations
    pred = act(pred)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/activation.py", line 292, in forward
    return torch.sigmoid(input)
TypeError: sigmoid(): argument 'input' (position 1) must be Tensor, not numpy.ndarray
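
In case it helps with debugging: the traceback shows that the ensemble/TTA path hands apply_model_activations a NumPy array, while the final Sigmoid is a torch.nn module that only accepts tensors. Here is a minimal reproduction of that class of error, together with the kind of tensor conversion that avoids it (illustrative only, I don't know what the actual fix will look like):

```python
import numpy as np
import torch

# The activation applied in apply_model_activations is a torch.nn module,
# so feeding it a NumPy batch (as the TTA/ensemble path does here) fails.
act = torch.nn.Sigmoid()
batch = np.random.rand(2, 256, 256, 1).astype(np.float32)  # illustrative NumPy batch

try:
    act(batch)  # TypeError: sigmoid(): argument 'input' ... must be Tensor, not numpy.ndarray
except TypeError as err:
    print(err)

# Converting to a tensor before applying the activation works as expected.
pred = act(torch.from_numpy(batch))
print(pred.shape, pred.dtype)  # torch.Size([2, 256, 256, 1]) torch.float32
```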
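
As a side note, the numbers in the ### OV-CROP ### / ### MERGE-OV-CROP ### blocks above look consistent: with 256x256 patches and 32 px of padding on each side, each patch only contributes its central 192 px, so a 1024x768 image needs 6 x 4 = 24 patches, with roughly 25 px of real overlap along x. A rough sketch of that arithmetic (my own hypothetical helper, not BiaPy's code):

```python
import math

def overlap_crop_layout(img_len, patch_len, padding):
    """Rough estimate of how many padded patches tile one image axis and how
    much consecutive patches overlap. Hypothetical helper, not BiaPy code."""
    step = patch_len - 2 * padding            # central region each patch contributes
    n_patches = math.ceil(img_len / step)     # patches needed to cover the axis
    excess = n_patches * step - img_len       # total overlap to distribute
    ov_px = excess // (n_patches - 1) if n_patches > 1 else 0
    return n_patches, ov_px, ov_px / step     # count, overlap (px), overlap fraction

print(overlap_crop_layout(1024, 256, 32))  # -> (6, 25, ~0.1302)  x axis
print(overlap_crop_layout(768, 256, 32))   # -> (4, 0, 0.0)       y axis
```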
danifranco commented 10 months ago

Just pushed the correction.

iarganda commented 10 months ago

That was fast, thanks a lot!