biomedia-mira / stochastic_segmentation_networks

Stochastic Segmentation Networks

Assertion Error in output size #2

Closed · meghbhalerao closed this issue 4 years ago

meghbhalerao commented 4 years ago

Hi all, thanks for the PyTorch implementation! I am trying to train 3D DeepMedic. My input image size is torch.Size([1, 3, 144, 144, 64]). I am instantiating the 3D DeepMedic with 3 channels and 2 classes: model = DeepMedic(3, 2). I am getting the following error message:

Traceback (most recent call last):
  File "../gandlf_run", line 92, in <module>
    main()
  File "../gandlf_run", line 87, in main
    TrainingManager(dataframe=data_full, headers = headers, outputDir=model_path, parameters=parameters, device=device)
  File "/gpfs/fs001/cbica/comp_space/bhaleram/GANDLF/GANDLF/training_manager.py", line 110, in TrainingManager
    device=device, parameters=parameters, holdoutDataFromPickle=holdoutData)
  File "/gpfs/fs001/cbica/comp_space/bhaleram/GANDLF/GANDLF/training_loop.py", line 265, in trainingLoop
    output = model(image)
  File "/cbica/home/bhaleram/.conda/envs/dss/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/gpfs/fs001/cbica/comp_space/bhaleram/GANDLF/GANDLF/models/deep_medic.py", line 70, in forward
    output_size = self.get_output_size(input_size)
  File "/gpfs/fs001/cbica/comp_space/bhaleram/GANDLF/GANDLF/models/base.py", line 58, in get_output_size
    assert all(o > 0 for o in output_size)
AssertionError

I am confused about what the error is. Is it something to do with the dimensions, since I am not passing the dimensions anywhere in the instantiation? Thanks for your help, Megh
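For reference, a minimal sketch of what triggers this, with the class signature and module path taken from the traceback above (the import path is an assumption based on those file paths):

```python
import torch
from GANDLF.models.deep_medic import DeepMedic  # path as in the traceback

model = DeepMedic(3, 2)               # 3 input channels, 2 classes
x = torch.randn(1, 3, 144, 144, 64)   # [batch, channels, x, y, z]
output = model(x)                     # raises AssertionError in get_output_size
```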

MiguelMonteiro commented 4 years ago

Hello, there are two issues:

  1. You seem to have an extra dimension in your image. How did you come to the conclusion that this is the shape of your image? If you are using our dataloader, one image should have shape [num_channels, z, y, x]; yours seems to have an extra dimension.
  2. You need to adjust the input patch size to work for your images. The default patch size in the config is 110x110x110, which is bigger than one of the dimensions in your image. This can be changed in the config (see the quick check below).
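A quick sanity check of point 2 (hypothetical variable names; the shapes are the ones quoted in this thread):

```python
patch_size = (110, 110, 110)   # default patch size from the config
image_shape = (144, 144, 64)   # spatial dimensions of the image above

# Flag any axis where the patch does not fit inside the image.
for p, s in zip(patch_size, image_shape):
    if p > s:
        print(f"patch dim {p} exceeds image dim {s}")  # prints for 110 > 64
```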

meghbhalerao commented 4 years ago

Hi, sorry for not being clear. I forgot to mention that I am using my own implementation of the training loop and dataloader, and only the DeepMedic architecture from here.

  1. I am using a custom dataloader. The shape of the tensor passed to the model is in the format [batch_size, num_channels, x, y, z].
  2. Again, sorry for not being clear. The size that I mentioned earlier is the patch size (the x, y, z part), not the original image size. My patch size is [144, 144, 64], as mentioned earlier. Please do let me know what the issue is. Thanks
MiguelMonteiro commented 4 years ago

If you are just using the model class and nothing else, then the issue is this: because our implementation does not use padding, the output patch size is smaller than the input patch size. What is happening is that your patch is too small for the number of convolutions in the network, which produces a non-positive output patch size; the assertion prevents that. You need to have fewer convolutions in the network or pad your input.
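A rough sketch of the arithmetic (the kernel counts here are illustrative, not the exact DeepMedic configuration):

```python
def valid_conv_output_size(input_size, num_convs, kernel_size=3):
    # Each unpadded ("valid") convolution shrinks the spatial size
    # by (kernel_size - 1) voxels.
    return input_size - num_convs * (kernel_size - 1)

# Full-resolution pathway on the 64-voxel axis:
print(valid_conv_output_size(64, 8))    # 48, still positive
# A downsampled pathway sees a smaller input, e.g. 64 // 3 = 21:
print(valid_conv_output_size(21, 8))    # 5, barely positive
print(valid_conv_output_size(21, 11))   # -1, trips the `o > 0` assertion
```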

meghbhalerao commented 4 years ago

Oh, okay, thanks. I will try using a bigger patch size (greater than 110x110x110) and check.

MiguelMonteiro commented 4 years ago

Your patch-size choice implies that your images are not at isotropic resolution; you may also want to watch out for that if you want good performance.
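A quick way to check the voxel spacing (a sketch assuming SimpleITK and NIfTI inputs; the filename is hypothetical):

```python
import SimpleITK as sitk

img = sitk.ReadImage("subject_0001.nii.gz")  # hypothetical filename
print(img.GetSpacing())  # e.g. (1.0, 1.0, 3.0) is anisotropic:
                         # consider resampling to isotropic spacing first
```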