If my understanding is right, the concatenation in the `InputTransition` block should be applied along `dim=1` instead of `dim=0`, because the second dimension is the channel dimension. I.e.

```python
# split input in to 16 channels
x16 = torch.cat((x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x), 0)
```

should be

```python
# split input in to 16 channels
x16 = torch.cat((x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x), 1)
```

Meanwhile, `x`'s channel count has already been changed by `conv1` at that point, so the original input (whose channel count is 1) needs to be saved and concatenated instead, namely

```python
x16 = torch.cat((input_x, input_x, input_x, input_x, input_x, input_x, input_x, input_x, input_x, input_x, input_x, input_x, input_x, input_x, input_x, input_x), 1)
```
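Putting the suggested fix together, here is a minimal sketch of what a corrected `InputTransition` forward pass could look like. The layer names (`conv1`, `bn1`, `relu1`) and hyperparameters are assumptions for illustration, not quotes from the repository:

```python
import torch
import torch.nn as nn

class InputTransition(nn.Module):
    """Sketch: map a 1-channel volume to 16 channels, with a residual
    connection formed by replicating the input along the channel axis."""
    def __init__(self):
        super().__init__()
        # Assumed layer configuration; the real repository may differ.
        self.conv1 = nn.Conv3d(1, 16, kernel_size=5, padding=2)
        self.bn1 = nn.BatchNorm3d(16)
        self.relu1 = nn.PReLU(16)

    def forward(self, x):
        out = self.bn1(self.conv1(x))      # (N, 16, Z, Y, X)
        # Replicate the original 1-channel input along dim=1 (channels),
        # not dim=0 (batch), so the shapes match for the addition.
        x16 = torch.cat([x] * 16, dim=1)   # (N, 16, Z, Y, X)
        return self.relu1(out + x16)
```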
Sorry for the delay. The dimensions are BatchSize, Channels, Z, Y, X. The point is to create 16 channels, not to increase the batch size by 16x.
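A quick standalone shape check (not from the repository) makes the difference between the two axes concrete:

```python
import torch

x = torch.randn(2, 1, 8, 8, 8)  # (BatchSize, Channels, Z, Y, X)

print(torch.cat([x] * 16, dim=0).shape)  # torch.Size([32, 1, 8, 8, 8]) -- 16x batch
print(torch.cat([x] * 16, dim=1).shape)  # torch.Size([2, 16, 8, 8, 8]) -- 16 channels
```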
Thanks very much for the reply!