yaoppeng / U-Net_v2


my own data set input size #8

feililucky opened this issue 7 months ago

feililucky commented 7 months ago

Hello, if I use my own dataset and change the input size to 112, the following error appears:

y = self.deconv2(f41) + f31
RuntimeError: The size of tensor a (8) must match the size of tensor b (7) at non-singleton dimension 3

How can I solve this problem? Thank you very much.

yaoppeng commented 7 months ago

This is because the input goes through the following downsampling and upsampling path: 112 -> 56 -> 28 -> 14 -> 7 -> 4 -> 8, so the feature map upsampled from 4 back to 8 no longer matches the skip connection of size 7.

You can either 1) resize the input image and label to 128, or 2) resize the upsampled feature map from 8 back to 7.
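For option 1, a minimal sketch (the tensor names and the example batch below are illustrative, not taken from the repository): resizing both images and labels to 128 keeps every stage's size divisible by 2, so the decoder's upsampled tensors line up with the skip connections again.

import torch
import torch.nn.functional as F

images = torch.rand(8, 3, 112, 112)                    # example image batch
masks = torch.randint(0, 2, (8, 1, 112, 112)).float()  # example binary masks

# 128 halves cleanly at every stage (128 -> 64 -> 32 -> 16 -> 8 -> 4),
# so each upsampled feature map matches its skip connection
images = F.interpolate(images, size=(128, 128), mode="bilinear", align_corners=False)
masks = F.interpolate(masks, size=(128, 128), mode="nearest")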

feililucky commented 7 months ago

Thank you for your help. I also want to ask the following. For training:

model = UNetV2(n_classes=n_classes, deep_supervision=True, pretrained_path=pretrained_path)
x = torch.rand((8, 3, 112, 112))
outputs = model(x)
# outputs[-1].shape: (8, 1, 112, 112)
# outputs[-2].shape: (8, 1, 56, 56)
# outputs[-3].shape: (8, 1, 28, 28)
# outputs[-4].shape: (8, 1, 14, 14)

How should I write the loss function? Or can I write it directly like this?

outputs = model(x)[::-1][-1]
outputs = torch.sigmoid(outputs)
loss = Dice_loss(outputs, labels)
# outputs.shape: (8, 1, 112, 112)

For validation, should I write

model = UNetV2(n_classes=n_classes, deep_supervision=True, pretrained_path=pretrained_path)

or

model = UNetV2(n_classes=n_classes, deep_supervision=False, pretrained_path=pretrained_path)

Sorry to trouble you, thank you.

yaoppeng commented 7 months ago

You can write it like the following:

loss_fn = nn.CrossEntropyLoss()

outputs = ...

# loss of the first output against the first ground-truth map
loss = loss_fn(outputs[0], y[0])
# add the loss of each remaining deep-supervision output
for i, output in enumerate(outputs[1:]):
    loss += loss_fn(output, y[i+1])

loss.backward()
....

Here y is the list of ground-truth maps for deep supervision, one per output resolution.
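A minimal sketch of how such a ground-truth list could be built (the names labels and outputs and the exact output ordering are assumptions here, not part of the repository's code): downsample the full-resolution label map to the spatial size of each output, using nearest-neighbour interpolation so class indices are preserved.

import torch
import torch.nn.functional as F

# labels: (B, H, W) integer class map at the full input resolution
# outputs: list of logits returned by the model under deep supervision
y = []
for out in outputs:
    # resize the label map to this output's spatial size; nearest keeps class ids intact
    resized = F.interpolate(labels.unsqueeze(1).float(), size=out.shape[2:], mode="nearest")
    y.append(resized.squeeze(1).long())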