black0017 / MedicalZooPytorch

A pytorch-based deep learning framework for multi-modal 2D/3D medical image segmentation
MIT License

RuntimeError: Expected 5-dimensional input for 5-dimensional weight 8 4 3 3 3, but got 4-dimensional input of size [4, 256, 64, 64] instead #6

Closed: xuanxu92 closed 4 years ago

xuanxu92 commented 4 years ago

Hi, when I try to train on BraTS2018 with the command `python train_brats2018_new.py`, it fails with the following error:

    Summary train Epoch 1: Loss:0.7025 DSC:29.7528 Background : 0.8263 NCR/NET : 0.0612 ED : 0.2336 ET : 0.0689
    Traceback (most recent call last):
      File "train_brats2018_new.py", line 75, in <module>
        main()
      File "train_brats2018_new.py", line 33, in main
        trainer.training()
      File "/content/drive/My Drive/MedicalZooPytorch/lib/train/trainer.py", line 38, in training
        self.validate_epoch(epoch)
      File "/content/drive/My Drive/MedicalZooPytorch/lib/train/trainer.py", line 83, in validate_epoch
        output = self.model(input_tensor)
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/content/drive/My Drive/MedicalZooPytorch/lib/medzoo/Vnet.py", line 150, in forward
        out16 = self.in_tr(x)
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/content/drive/My Drive/MedicalZooPytorch/lib/medzoo/Vnet.py", line 58, in forward
        out = self.conv1(x)
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 480, in forward
        self.padding, self.dilation, self.groups)
    RuntimeError: Expected 5-dimensional input for 5-dimensional weight 16 4 5 5 5, but got 4-dimensional input of size [4, 256, 64, 64] instead

Could you help me solve that?

wangyaojlu commented 4 years ago

I met the same problem. I fixed it by changing the code in `medloaders/brats2018.py`, line 110, as below:

    if self.mode == 'train' and self.augmentation:
        [img_t1, img_t1ce, img_t2, img_flair], img_seg = self.transform(
            [img_t1, img_t1ce, img_t2, img_flair], img_seg)

    # unsqueeze(0) adds the missing channel dimension to each modality
    return (torch.FloatTensor(img_t1.copy()).unsqueeze(0),
            torch.FloatTensor(img_t1ce.copy()).unsqueeze(0),
            torch.FloatTensor(img_t2.copy()).unsqueeze(0),
            torch.FloatTensor(img_flair.copy()).unsqueeze(0),
            torch.FloatTensor(img_seg.copy()))
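The reason this works: `nn.Conv3d` expects 5-dimensional input of shape `(batch, channels, depth, height, width)`, but without the per-sample `unsqueeze(0)` the DataLoader collates the 3D volumes into a 4D batch `(batch, depth, height, width)`, which is exactly the `[4, 256, 64, 64]` tensor in the traceback. A minimal sketch of the shape issue (layer hyperparameters and sizes here are illustrative, not the actual ones from `Vnet.py`):

```python
import torch
import torch.nn as nn

# nn.Conv3d expects 5D input: (batch, channels, depth, height, width)
conv = nn.Conv3d(in_channels=1, out_channels=16, kernel_size=5, padding=2)

vol = torch.rand(64, 64, 64)                    # one 3D volume, shape (D, H, W)
batch_4d = vol.unsqueeze(0).repeat(4, 1, 1, 1)  # (4, 64, 64, 64): 4D, no channel dim

try:
    conv(batch_4d)  # fails: old PyTorch demands 5D input; newer versions
    # treat 4D input as unbatched (C, D, H, W) and reject the channel count
except RuntimeError as e:
    print("4D input fails:", e)

# Adding the channel dimension per sample gives (1, D, H, W); batching those
# yields (B, 1, D, H, W), the 5D shape Conv3d requires.
batch_5d = batch_4d.unsqueeze(1)  # (4, 1, 64, 64, 64)
out = conv(batch_5d)
print(out.shape)  # torch.Size([4, 16, 64, 64, 64])
```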

iliasprc commented 4 years ago

Sorry for the late reply. @wangyaojlu's answer is correct: there was an error with the shape of the data during validation. It has been updated.