Hello,
I have a model that makes use of nn.SpatialBatchNormalization, and I got the error in the title.
I want to run inference on a single image (3x224x224). I've tried both the "unsqueezed" version (1x3x224x224) and the original 3D tensor; both lead to the same error.
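For reference, here is roughly how I'm calling the model (a minimal sketch; the file name and the random image are placeholders for my actual checkpoint and input):

```lua
require 'nn'

-- Minimal sketch of my inference call; 'model.t7' and the random image
-- are placeholders for the real checkpoint and the real 3x224x224 input.
local net = torch.load('model.t7')
net:evaluate()                        -- inference mode, so self.train == false

local img = torch.rand(3, 224, 224)

-- Attempt 1: forward the 3D tensor directly
-- local out = net:forward(img)

-- Attempt 2: "unsqueeze" to a 1x3x224x224 mini-batch
local out = net:forward(img:view(1, 3, 224, 224))  -- both attempts hit the same error
```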
The relevant part of the network is as follows:
```
(1): nn.SpatialConvolution(3 -> 32, 3x3, 1,1, 1,1) without bias
(2): nn.SpatialBatchNormalization (4D) (0)
```
I've added the following prints to the checkInputDim function that throws the error:
```lua
function BN:checkInputDim(input)
   local iDim = input:dim()
   assert(iDim == self.nDim or (iDim == self.nDim - 1 and self.train == false),
      string.format('only mini-batch supported (%dD tensor), got %dD tensor instead',
         self.nDim, iDim))
   local featDim = (iDim == self.nDim - 1) and 1 or 2
   print("train mode?", self.train)
   print("featDim", featDim)
   print("runningMean:nElement", self.running_mean:nElement())
   print("input size:", input:size())
   assert(input:size(featDim) == self.running_mean:nElement(),
      string.format('got %d-feature tensor, expected %d',
         input:size(featDim), self.running_mean:nElement()))
end
```
This yields the following output for the unsqueezed image:
```
train mode?	false
featDim	2
runningMean:nElement	0
input size:
   1
  32
 224
 224
[torch.LongStorage of size 4]
```
Any help or a lead on what to try next is much appreciated, thanks!
I believe this is caused by running_mean being empty (0 elements). Another person has probably encountered the same error: https://github.com/gaobb/DLDL-v2/issues/4
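A quick way to check this hypothesis across all BN layers would be something like the following (untested sketch, assuming the model is loaded as net; the running_var/running_std handling is a guess on my part, since the field name changed between nn versions):

```lua
-- Untested sketch: inspect the running statistics of every BN layer.
-- Assumes the model is already loaded as `net` (e.g. net = torch.load('model.t7')).
local bns = net:findModules('nn.SpatialBatchNormalization')
for i, bn in ipairs(bns) do
   local var = bn.running_var or bn.running_std  -- field name differs across nn versions
   print(i,
      'running_mean:', bn.running_mean:nElement(),
      'running_var/std:', var and var:nElement() or 'nil')
end
```

If they all turn out to be empty, I suppose I could resize them and fill them with zeros/ones just to get forward() running, but that would not reproduce the trained behaviour, so I assume the real fix is a checkpoint that actually contains the statistics.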