Closed meghbhalerao closed 4 years ago
Hello, there are two issues:
1) You seem to have an extra dimension in your image. How did you come to the conclusion that this is the shape of your image? If you are using our dataloader, one image should have the shape `[num_channels, z, y, x]`; yours seems to have an extra dimension.
2) You need to adjust the input patch size to work for your images. The default patch size in the config is 110x110x110, which is bigger than one of the dimensions in your image. This can be changed in the config.
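For reference, a minimal sketch (hypothetical tensors, not the repo's actual dataloader code) of the two checks above:

```python
import torch

# One image is expected as 4D [num_channels, z, y, x]; a 5D tensor
# means an extra (e.g. batch) dimension has crept in.
image = torch.randn(3, 144, 144, 64)   # 4D: [channels, z, y, x] -- OK
batched = image.unsqueeze(0)           # 5D: extra leading dimension
print(image.dim(), batched.dim())      # 4 5

# The default 110x110x110 patch must fit inside every spatial axis;
# the last axis here is only 64, so the config value needs lowering.
patch_size = (110, 110, 110)
print(all(s >= p for s, p in zip(image.shape[1:], patch_size)))  # False
```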
Hi - sorry for not being clear, I forgot to mention that I am using my own implementation of the training loop and dataloader, and only the DeepMedic architecture from here. The shape I posted is `[batch_size, num_channels, x, y, z]`, where x, y, z refer to the patch size (the cropped part), not the original image size. My patch size is [144, 144, 64], as mentioned earlier. Please do let me know what the issue is. Thanks

If you are just using the model class and nothing else, then the issue is this: because our implementation does not use padding, the output patch size is smaller than the input patch size. What is happening is that your patch is too small for the number of convolutions in the network, which produces a negative output patch size; this assertion prevents that. You need either fewer convolutions in the network or a padded input.
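To make the shrinkage concrete, here is a rough sketch (the kernel counts are illustrative, not the repo's exact architecture) of how unpadded convolutions reduce the output patch size:

```python
# Each unpadded (valid) k x k x k convolution shrinks every spatial
# axis by (k - 1); stack enough of them and the output size goes
# negative, which is what the assertion guards against.
def output_patch_size(input_size, kernel_sizes):
    out = list(input_size)
    for k in kernel_sizes:
        out = [s - (k - 1) for s in out]
    return out

# Illustrative numbers: ten 3x3x3 convolutions cost 20 voxels per axis.
print(output_patch_size((144, 144, 64), [3] * 10))  # [124, 124, 44]
# A deeper stack (or the extra shrinkage from DeepMedic's downsampled
# pathway) can push a 64-voxel axis below zero:
print(output_patch_size((64, 64, 64), [3] * 40))    # [-16, -16, -16]
```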
Oh, okay, thanks. I will try using a bigger patch size (greater than 110x110x110) and check.
Your patch-size choice implies that your images are not at isotropic resolution; you may also want to watch out for that if you want good performance.
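As a side note, a minimal sketch (assuming SimpleITK is installed; the 1 mm target spacing is just an example) of resampling a volume to isotropic resolution before training:

```python
import SimpleITK as sitk

def resample_isotropic(image, new_spacing=(1.0, 1.0, 1.0)):
    # Choose the output size so the physical extent is preserved.
    old_spacing = image.GetSpacing()
    old_size = image.GetSize()
    new_size = [int(round(sz * sp / ns))
                for sz, sp, ns in zip(old_size, old_spacing, new_spacing)]
    return sitk.Resample(image, new_size, sitk.Transform(),
                         sitk.sitkLinear, image.GetOrigin(), new_spacing,
                         image.GetDirection(), 0.0, image.GetPixelID())

# iso = resample_isotropic(sitk.ReadImage("scan.nii.gz"))  # hypothetical file
```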
Hi all, thanks for the PyTorch implementation! I am trying to train 3D DeepMedic. My input image size is `torch.Size([1, 3, 144, 144, 64])`. I am instantiating the 3D DeepMedic with 3 channels and 2 classes: `model = DeepMedic(3, 2)`. I am getting an error message (an assertion failure) and I am confused about what the error is. Is it something to do with the dimensions, since I am not passing the dimensions anywhere in the instantiation? Thanks for your help, Megh
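For anyone landing here, a hypothetical minimal reproduction of the setup described above (the import path is a guess; adjust it to wherever the model class lives in this repo):

```python
import torch
from deepmedic import DeepMedic  # hypothetical import path

model = DeepMedic(3, 2)               # 3 input channels, 2 classes
x = torch.randn(1, 3, 144, 144, 64)  # [batch, channels, x, y, z]
out = model(x)  # with unpadded convs, a too-small patch trips the assertion
```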