huanghoujing / beyond-part-models

PCB of paper: Beyond Part Models: Person Retrieval with Refined Part Pooling, using Pytorch
331 stars · 81 forks

Error on inference #20

Closed anshu1106 closed 6 years ago

anshu1106 commented 6 years ago

Getting this error when trying to do a prediction: `ValueError: Expected more than 1 value per channel when training, got input size [1, 256, 1, 1]`

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-14-e0592f495007> in <module>()
      1 img_variable1 = Variable(img_tensor1)
----> 2 fc_out1 = model(img_variable1)
      3 
      4 # global_feat1, local_feat1 = fc_out1
      5 # print(global_feat1.size())

/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    355             result = self._slow_forward(*input, **kwargs)
    356         else:
--> 357             result = self.forward(*input, **kwargs)
    358         for hook in self._forward_hooks.values():
    359             hook_result = hook(self, input, result)

~/meet-up/internship/Person-ReId/beyond-part-models/bpm/model/PCBModel.py in forward(self, x)
     58         (stripe_h, feat.size(-1)))
     59       # shape [N, c, 1, 1]
---> 60       local_feat = self.local_conv_list[i](local_feat)
     61       # shape [N, c]
     62       local_feat = local_feat.view(local_feat.size(0), -1)

/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    355             result = self._slow_forward(*input, **kwargs)
    356         else:
--> 357             result = self.forward(*input, **kwargs)
    358         for hook in self._forward_hooks.values():
    359             hook_result = hook(self, input, result)

/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py in forward(self, input)
     65     def forward(self, input):
     66         for module in self._modules.values():
---> 67             input = module(input)
     68         return input
     69 

/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    355             result = self._slow_forward(*input, **kwargs)
    356         else:
--> 357             result = self.forward(*input, **kwargs)
    358         for hook in self._forward_hooks.values():
    359             hook_result = hook(self, input, result)

/anaconda3/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py in forward(self, input)
     35         return F.batch_norm(
     36             input, self.running_mean, self.running_var, self.weight, self.bias,
---> 37             self.training, self.momentum, self.eps)
     38 
     39     def __repr__(self):

/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
   1009         size = list(input.size())
   1010         if reduce(mul, size[2:], size[0]) == 1:
-> 1011             raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
   1012     f = torch._C._functions.BatchNorm(running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled)
   1013     return f(input, weight, bias)

ValueError: Expected more than 1 value per channel when training, got input size [1, 256, 1, 1]

Please help. The PyTorch version is 0.3.

huanghoujing commented 6 years ago

This happens because the input doesn't satisfy the usage condition of PyTorch's BatchNorm layer in training mode: with a batch of one image and 1×1 spatial size, there is only one value per channel, so batch statistics cannot be computed. You can set the batch size to larger than 1 to solve this problem.
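The condition can be demonstrated with a standalone BatchNorm layer (a minimal sketch using the current `torch` API; the 0.3-era `Variable` wrapper is omitted):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(256)

x = torch.randn(1, 256, 1, 1)  # batch of 1, spatial size 1x1: one value per channel

bn.train()
try:
    bn(x)  # training mode needs >1 value per channel to compute batch statistics
except ValueError as e:
    print("train mode:", e)

bn.eval()
y = bn(x)  # eval mode uses running statistics, so a single value per channel is fine
print("eval mode output shape:", tuple(y.shape))

x2 = torch.randn(2, 256, 1, 1)  # a batch of 2 also satisfies the condition
bn.train()
print("train mode, batch of 2:", tuple(bn(x2).shape))
```

So either a larger batch or eval mode avoids the error; for inference on a single image, eval mode is the natural choice.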

anshu1106 commented 6 years ago

I am trying the code below. Where exactly do you want me to set the batch size > 1? It would be really helpful if you could help.

```python
from PIL import Image
import torch
from torch.autograd import Variable
from torchvision import transforms

from bpm.model.PCBModel import PCBModel

model = PCBModel()
checkpoint = torch.load('bpm_model_weight.pth')
model.load_state_dict(checkpoint)

normalize = transforms.Normalize(
    mean=[0.485, 0.456, 0.406],
    std=[0.229, 0.224, 0.225],
)
preprocess = transforms.Compose([
    transforms.Scale(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])

img1 = Image.open('feature1/0.jpg')

img_tensor1 = preprocess(img1)
img_tensor1 = img_tensor1.unsqueeze(0)  # add the batch dimension (note: `imgtensor1.unsqueeze(0)` was a typo and its result was discarded)

img_variable1 = Variable(img_tensor1)
fc_out1 = model(img_variable1)
```

(The same traceback as above follows, ending in `ValueError: Expected more than 1 value per channel when training, got input size [1, 256, 1, 1]`.)
huanghoujing commented 6 years ago

Well, the error says `ValueError: Expected more than 1 value per channel when training, got input size [1, 256, 1, 1]`, so the model is still in training mode. You can set the model to eval mode to allow a single-image batch: before the line `fc_out1 = model(img_variable1)` in your code, insert the line `model.eval()`.
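The fix can be sketched end to end with a stand-in module (hypothetical, for illustration: a small network whose head contains a BatchNorm layer after pooling, mimicking the failing `local_conv_list` branch of `PCBModel`):

```python
import torch
import torch.nn as nn

# Stand-in for a model whose head applies BatchNorm to pooled 1x1 features,
# as PCBModel's local_conv_list does. Not the actual PCBModel.
model = nn.Sequential(
    nn.Conv2d(3, 256, kernel_size=1),
    nn.AdaptiveAvgPool2d(1),   # collapse spatial dims to 1x1, as in part pooling
    nn.BatchNorm2d(256),
)

img_tensor = torch.randn(1, 3, 224, 224)  # single-image batch

model.eval()                # switch BatchNorm to running statistics
with torch.no_grad():       # no gradients needed at inference time
    out = model(img_tensor)
print(tuple(out.shape))  # (1, 256, 1, 1) -- no ValueError in eval mode
```

Without the `model.eval()` call, the same forward pass raises the `ValueError` from the traceback above.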

anshu1106 commented 6 years ago

Thanks @huanghoujing. It works.