Open Frank-Cai0709 opened 2 months ago
Also, in deep_supervision.py I added the line target = F.interpolate(target, size=(256, 256), mode='bilinear', align_corners=False) to upsample the mask to 256x256; otherwise, when computing the loss for out2, the ground truth is only 128x128. I assume this shouldn't affect training, though.
for i, inputs in enumerate(zip(*args)):
    if i == 0:
        continue
    output, target = inputs
    target = F.interpolate(target, size=(256, 256), mode='bilinear', align_corners=False)
    inputs = output, target
    l += weights[i] * self.loss(*inputs)
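One caveat worth checking: bilinear interpolation blends neighboring pixels, so on an integer label mask it can produce fractional values between class ids. A minimal sketch (with a synthetic mask, not the actual nnU-Net data pipeline) of upsampling with mode='nearest', which keeps label values discrete:

```python
import torch
import torch.nn.functional as F

# hypothetical integer segmentation mask, shape (N, 1, H, W) at 128x128
target = torch.randint(0, 3, (2, 1, 128, 128)).float()

# mode='nearest' copies the closest pixel, so class ids 0/1/2 stay intact;
# 'bilinear' would average neighboring labels into fractional values
target_up = F.interpolate(target, size=(256, 256), mode='nearest')

print(tuple(target_up.shape))          # (2, 1, 256, 256)
print(target_up.unique().tolist())     # only values from {0.0, 1.0, 2.0}
```

Whether this matters in practice depends on how the loss casts the target; cross-entropy applies .long(), which silently floors any blended values.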
The gt and output sizes do not match; the error is as follows:
Traceback (most recent call last):
  File "/media/dell/D/cjt/cjt/unetv2/nnunetv2/run/run_training.py", line 311, in <module>
    run_training_entry()
  File "/media/dell/D/cjt/cjt/unetv2/nnunetv2/run/run_training.py", line 305, in run_training_entry
    run_training(args.dataset_name_or_id, args.configuration, args.fold, args.tr, args.p, args.pretrained_weights,
  File "/media/dell/D/cjt/cjt/unetv2/nnunetv2/run/run_training.py", line 230, in run_training
    nnunet_trainer.run_training(dataset_id=dataset_id)
  File "/media/dell/D/cjt/cjt/unetv2/nnunetv2/training/nnUNetTrainer/ISICTrainer.py", line 145, in run_training
    train_outputs.append(self.train_step(next(self.dataloader_train)))
  File "/media/dell/D/cjt/cjt/unetv2/nnunetv2/training/nnUNetTrainer/ISICTrainer.py", line 196, in train_step
    l = self.loss(output, target)
  File "/home/dell/anaconda3/envs/unetv2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/media/dell/D/cjt/cjt/unetv2/nnunetv2/training/loss/deep_supervision.py", line 41, in forward
    l += weights[i] * self.loss(*inputs)
  File "/home/dell/anaconda3/envs/unetv2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/media/dell/D/cjt/cjt/unetv2/nnunetv2/training/loss/compound_losses.py", line 54, in forward
    ce_loss = self.ce(net_output, target[:, 0].long()) \
  File "/home/dell/anaconda3/envs/unetv2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/media/dell/D/cjt/cjt/unetv2/nnunetv2/training/loss/robust_ce_loss.py", line 19, in forward
    loss = super().forward(input, target.long())
  File "/home/dell/anaconda3/envs/unetv2/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 1174, in forward
    return F.cross_entropy(input, target, weight=self.weight,
  File "/home/dell/anaconda3/envs/unetv2/lib/python3.8/site-packages/torch/nn/functional.py", line 3029, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: input and target batch or spatial sizes don't match: target [49, 128, 128], input [49, 1, 256, 256]
I changed some of the default settings in nnUNet_plans.json. Deep supervision is controlled by the following code:
if self.deep_supervision:
    return seg_outs[::-1]
else:
    return seg_outs[-1]
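With deep supervision enabled, the network returns the segmentation heads highest-resolution first, and the loss wrapper expects a matching target at each scale. A minimal sketch of that pairing, assuming three heads and three classes (the shapes here are illustrative, not nnU-Net's actual plans):

```python
import torch
import torch.nn.functional as F

# hypothetical heads from low to high resolution, as the decoder produces them
seg_outs = [torch.randn(2, 3, s, s) for s in (64, 128, 256)]
outputs = seg_outs[::-1]  # high -> low, matching `return seg_outs[::-1]`

# one target per head: downsample the full-resolution mask to each scale
target = torch.randint(0, 3, (2, 1, 256, 256)).float()
targets = [F.interpolate(target, size=o.shape[2:], mode='nearest')
           for o in outputs]

# every output/target pair now agrees spatially, so no size-mismatch error
for o, t in zip(outputs, targets):
    assert o.shape[2:] == t.shape[2:]
```

This is why the error above appears when only one full-resolution target is passed: the 128x128 head is compared against a mask at a different scale.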
During training and testing, the labels should be integers such as 0, 1, 2. For visualization you can rescale them to [0, 255]. Please try again and check whether it works.
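The rescaling mentioned above can be done in one line; a minimal sketch with a hypothetical 3-class mask (the class count is an assumption, not taken from the dataset):

```python
import numpy as np

# hypothetical label mask with class ids 0, 1, 2
mask = np.array([[0, 1],
                 [2, 1]], dtype=np.uint8)

# spread class ids over [0, 255] purely for display;
# keep the raw 0/1/2 values for training and evaluation
num_classes = 3
vis = (mask.astype(np.float32) * (255 / (num_classes - 1))).astype(np.uint8)

print(vis.tolist())  # [[0, 127], [255, 127]]
```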
Hello author, I'd like to ask about something: I set the environment variables following your steps and downloaded the ISIC2017 raw data and preprocessed data from Google Drive, but during training the dsc and miou are always 0%.
I made some changes to the _internal_maybe_mirror_and_predict function, because with deep supervision enabled it outputs a tuple, and I took the second element of the tuple as the output, as shown below. Also, every mask in raw_data looks completely black; presumably your preprocessing remapped the pixel value 255 to 1. Will any of this affect training?
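The "all-black mask" observation is easy to verify directly. A minimal sketch with a synthetic mask (standing in for an actual raw-data file) showing why a foreground value of 1 displays as black in an image viewer yet is still a valid label map:

```python
import numpy as np

# synthetic binary mask: foreground encoded as 1 instead of 255
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1

# 0 and 1 are both near-black on a 0-255 display, but the label values
# are exactly what a 2-class cross-entropy / Dice loss expects
print(np.unique(mask))  # [0 1]
```

If np.unique on a real mask file reports only {0, 1} (or {0, 1, 2}), the data is fine for training and only looks empty on screen.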