[Closed] shinomiya-akirawane closed this issue 1 year ago
The tensor a is:

```
tensor([[[0.0094, 0.0129, 0.0148, ..., 0.0104, 0.0110, 0.0102],
         [0.0078, 0.0092, 0.0045, ..., 0.0089, 0.0092, 0.0069],
         [0.0054, 0.0058, 0.0087, ..., 0.0116, 0.0114, 0.0071],
         ...,
         [0.0087, 0.0107, 0.0076, ..., 0.0150, 0.0129, 0.0111],
         [0.0094, 0.0073, 0.0089, ..., 0.0057, 0.0117, 0.0067],
         [0.0085, 0.0109, 0.0082, ..., 0.0091, 0.0103, 0.0077]],

        [[0.0026, 0.0038, 0.0028, ..., 0.0045, 0.0039, 0.0065],
         [0.0071, 0.0027, 0.0018, ..., 0.0040, 0.0035, 0.0042],
         [0.0011, 0.0021, 0.0004, ..., 0.0037, 0.0030, 0.0043],
         ...,
         [0.0095, 0.0077, 0.0096, ..., 0.0067, 0.0060, 0.0072],
         [0.0070, 0.0137, 0.0070, ..., 0.0090, 0.0042, 0.0060],
         [0.0094, 0.0063, 0.0086, ..., 0.0103, 0.0050, 0.0078]]],
       device='cuda:0', grad_fn=...)
```

The tensor b is:
```
tensor([[[[False, False, False, ..., False, False, False],
          [False, False, False, ..., False, False, False],
          ...,
          [False, False, False, ..., False, False, False]],

         ...,

         [[False, False, False, ..., False, False, False],
          [False, False, False, ..., False, False, False],
          ...,
          [False, False, False, ..., False, False, False]]],


        [[[False, False, False, ..., False, False, False],
          [False, False, False, ..., False, False, False],
          ...,
          [False, False, False, ..., False, False, False]],

         ...,

         [[False, False, False, ..., False, False, False],
          [False, False, False, ..., False, False, False],
          ...,
          [False, False, False, ..., False, False, False]]]],
       device='cuda:0')
```
The failing line is:

```python
loss_seg_dice += dice_loss(y_prob[:,1,...], label_batch[:labeled_bs,...] == 1)
```

Obviously, the sizes of the two input tensors are different. You should check the shapes of `y_prob[:,1,...]` and `label_batch[:labeled_bs,...] == 1`. You can use this data directly in your experiments to see whether it works or not: https://github.com/yulequan/UA-MT/tree/master/data/2018LA_Seg_Training%20Set
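A quick way to confirm the mismatch is to print both operands right before the loss call. This is a hypothetical debugging snippet using the variable names from the line above; the surrounding training loop is assumed:

```python
# Debugging sketch: inspect both operand shapes before calling dice_loss.
pred = y_prob[:, 1, ...]                       # expected: [2, 112, 112, 80]
target = (label_batch[:labeled_bs, ...] == 1)  # expected: [2, 112, 112, 80]
print(pred.shape, target.shape)
assert pred.shape == target.shape, f"shape mismatch: {pred.shape} vs {target.shape}"
loss_seg_dice += dice_loss(pred, target)
```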
The shape of `y_prob[:,1,...]` is `[2, 112, 80]` and the shape of `label_batch[:labeled_bs,...] == 1` is `[2, 112, 112, 80]`. I am using this dataset. Does that mean the problem may be linked to the format of the dataset?
Of course. The first prediction should be of shape `[2 (batch size), 112, 112, 80 (patch size)]`. You should double-check the data loader part.
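One way to check is to print the shapes coming out of the loader before they reach the network. A minimal sketch, assuming the `trainloader` and the `'image'`/`'label'` batch keys used in the MC-Net training script:

```python
# Sketch: confirm what the data loader actually yields.
sampled_batch = next(iter(trainloader))
volume_batch = sampled_batch['image']  # expected: [2, 1, 112, 112, 80]
label_batch = sampled_batch['label']   # expected: [2, 112, 112, 80]
print(volume_batch.shape, label_batch.shape)
```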
I haven't changed anything in the data loader folder, and it didn't give an error message when the data was loaded. The structure of the LA dataset is:

```
├── data
│   ├── LA
│   │   ├── 2018LA_Seg_Training Set
│   │   │   ├── ...
│   │   │   └── ...
│   │   ├── train.list
│   │   └── test.list
```
The output Y should be of size `[2, 2, 112, 112, 80]`. The second dimension denotes the channel number.
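To see why the channel dimension matters here, a toy check with random tensors (sizes taken from this thread):

```python
import torch
import torch.nn.functional as F

# With the channel dimension present, slicing channel 1 keeps all spatial dims.
Y = torch.randn(2, 2, 112, 112, 80)   # [batch, channel, D, H, W]
y_prob = F.softmax(Y, dim=1)          # softmax over the channel dimension
print(y_prob[:, 1, ...].shape)        # torch.Size([2, 112, 112, 80])

# If the channel dimension is missing, the same slice eats a spatial dim
# instead, producing exactly the [2, 112, 80] shape reported above.
Y_bad = torch.randn(2, 112, 112, 80)
print(Y_bad[:, 1, ...].shape)         # torch.Size([2, 112, 80])
```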
I printed the output Y and it is `[2, 112, 112, 80]`. I have no idea how the channel number went missing.
```python
model = net_factory(net_type=args.model, in_chns=1, class_num=num_classes, mode="train")
```

Do make sure that `num_classes` is 2.
I printed it to check and it is 2.
```python
outputs = model(volume_batch)
```

The outputs should be a list of three tensors of shape `[2, 2, 112, 112, 80]`. You should carefully check all the code. BTW, you can run the UA-MT model first to get familiar with the whole procedure; most parts are the same between ours and UA-MT. https://github.com/yulequan/UA-MT
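A small sanity check along those lines, assuming `model` and `volume_batch` are already set up as in the training script:

```python
# Sketch: verify the network returns three tensors of the expected shape.
outputs = model(volume_batch)
assert isinstance(outputs, (list, tuple)) and len(outputs) == 3
for i, out in enumerate(outputs):
    print(i, out.shape)  # each should be torch.Size([2, 2, 112, 112, 80])
```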
After I comment out the loss_seg, there is a new error message:
```
Traceback (most recent call last):
  File "/home/zhaoyan/MC-Net/./code/train_mcnet_3d.py", line 147, in <module>
    loss_seg_dice += dice_loss(y_prob[:,1,...], label_batch[:labeled_bs,...] == 1)
  File "/home/zhaoyan/MC-Net/code/utils/losses.py", line 5, in Binary_dice_loss
    intersection = 2 * torch.sum(predictive * target) + ep
RuntimeError: The size of tensor a (2) must match the size of tensor b (112) at non-singleton dimension 1
```
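This is the same shape problem surfacing inside the loss: when `predictive` is `[2, 112, 80]` and `target` is `[2, 112, 112, 80]`, PyTorch right-aligns the dimensions for broadcasting in `predictive * target`, so dimension 1 compares 2 against 112 and raises exactly this RuntimeError. A hedged sketch of a binary dice loss with an explicit shape guard (the actual `Binary_dice_loss` in `code/utils/losses.py` may differ in detail):

```python
import torch

def binary_dice_loss(predictive, target, ep=1e-8):
    # Guard first: silent broadcasting would otherwise produce the confusing
    # "size of tensor a (2) must match ... (112)" RuntimeError seen above.
    assert predictive.shape == target.shape, \
        f"shape mismatch: {tuple(predictive.shape)} vs {tuple(target.shape)}"
    target = target.float()
    intersection = 2 * torch.sum(predictive * target) + ep
    union = torch.sum(predictive) + torch.sum(target) + ep
    return 1 - intersection / union
```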