ma-xu / DCANet

[arXiv 2020] Deep Connected Attention Networks

bug in running resnet_cbam_dca #2

Open lqili opened 3 years ago

lqili commented 3 years ago

Hello, author. Thank you for your work. I want to use your network in my code, but I hit a problem. I have looked into it but cannot figure out what is wrong. Could you give me some suggestions?

Environment: Ubuntu 16.04; PyTorch 0.4

    x = self.layer3(x)
    x = self.layer4(x)

    x = self.avgpool(x[0])
    x = x.view(b, t, -1)  
    x = x.permute(0, 2, 1)
    f = F.avg_pool1d(x, t)
    f = f.view(b, self.feat_dim)
    y = self.fc(f)

    Traceback (most recent call last):
      File "/media/image1903/Linux/liang/Video-Person-ReID-master/main_video_person_reid.py", line 360, in <module>
        main()
      File "/media/image1903/Linux/liang/Video-Person-ReID-master/main_video_person_reid.py", line 231, in main
        train(model, criterion_xent, criterion_htri, optimizer, trainloader, use_gpu)
      File "/media/image1903/Linux/liang/Video-Person-ReID-master/main_video_person_reid.py", line 265, in train
        outputs, features = model(imgs)
      File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 121, in forward
        return self.module(*inputs[0], **kwargs[0])
      File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/media/image1903/Linux/liang/Video-Person-ReID-master/models/dca_resnet.py", line 317, in forward
        x = self.layer4(x)
      File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/container.py", line 91, in forward
        input = module(input)
      File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/media/image1903/Linux/liang/Video-Person-ReID-master/models/dca_resnet.py", line 227, in forward
        out = self.cbam({0: out, 1: x[1], 2: x[2]})
      File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/media/image1903/Linux/liang/Video-Person-ReID-master/models/dca_resnet.py", line 147, in forward
        x_out = self.SpatialGate({0:x_out[0],1:x[2]})
      File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/media/image1903/Linux/liang/Video-Person-ReID-master/models/dca_resnet.py", line 129, in forward
        x_compress = self.bnrelu(self.p1*self.compress(x[0])+self.p2*self.compress(pre_spatial_att))

RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 3
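
For reference, the failing line adds two compressed spatial maps weighted by p1 and p2, so both maps must share the same height and width. Below is a minimal sketch with made-up shapes (not the repository's actual module) that reproduces this kind of mismatch, plus one possible workaround via interpolation; it only illustrates the shape error, not the intended DCANet behavior.

    import torch
    import torch.nn.functional as F

    # Hypothetical shapes: the current stage's compressed map and the attention map
    # carried over from the previous stage disagree at dimension 3 (width 4 vs 3).
    cur_compress = torch.randn(8, 2, 4, 4)      # stands in for self.compress(x[0])
    pre_spatial_att = torch.randn(8, 2, 4, 3)   # stands in for self.compress(pre_spatial_att)

    try:
        out = 0.5 * cur_compress + 0.5 * pre_spatial_att  # p1/p2-weighted sum, as on dca_resnet.py line 129
    except RuntimeError as e:
        print(e)  # The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 3

    # One possible workaround (an assumption, not part of the official repo):
    # resize the carried-over attention map to the current spatial size before adding.
    pre_spatial_att = F.interpolate(pre_spatial_att, size=cur_compress.shape[2:],
                                    mode="bilinear", align_corners=False)
    out = 0.5 * cur_compress + 0.5 * pre_spatial_att
    print(out.shape)  # torch.Size([8, 2, 4, 4])

As the reply below notes, in this issue the mismatch most likely comes from running the CBAM logic inside the original dca_resnet.py rather than from the module itself.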

ma-xu commented 3 years ago

I noticed that you are using the CBAM module but running it inside the original dca_resnet.py model. If so, you can directly run cbam_resnet_dca.py instead.

For more details, please share the model file and I can debug it. Thanks.

975624756 commented 2 years ago

Hi, sorry to bother you. I want to put your code in my network, but there is a problem with the dimensions. Can you give me some advice?

    Traceback (most recent call last):
      File "main.py", line 43, in <module>
        ckpt.write_log('[INFO] Model parameters: {com[0]} flops: {com[1]}'.format(com=compute_model_complexity(model, (1, 3, args.height, args.width))
      File "/home/lan/Multi_gesture/LightMBN-master/utils/model_complexity.py", line 319, in compute_model_complexity
        model(input)  # forward
      File "/home/lan/.conda/envs/lan3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/lan/Multi_gesture/LightMBN-master/model/lmbn_r.py", line 100, in forward
        x = self.backone(x)
      File "/home/lan/.conda/envs/lan3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/lan/.conda/envs/lan3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 119, in forward
        input = module(input)
      File "/home/lan/.conda/envs/lan3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/lan/.conda/envs/lan3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 119, in forward
        input = module(input)
      File "/home/lan/.conda/envs/lan3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/lan/Multi_gesture/LightMBN-master/model/DCAN2.py", line 131, in forward
        out = self.conv1(x[0])
      File "/home/lan/.conda/envs/lan3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/lan/.conda/envs/lan3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 399, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "/home/lan/.conda/envs/lan3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 396, in _conv_forward
        self.padding, self.dilation, self.groups)
    RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 64, 1, 1], but got 3-dimensional input of size [64, 96, 32] instead

ma-xu commented 2 years ago

Hi @975624756 , from the error log:

    File "/home/lan/Multi_gesture/LightMBN-master/model/DCAN2.py", line 131, in forward
      out = self.conv1(x[0])

    RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 64, 1, 1], but got 3-dimensional input of size [64, 96, 32] instead

You may need to double-check this line.
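
One likely cause, offered as an assumption based on the shapes in the log rather than a confirmed answer: the DCA blocks pass a dict such as {0: features, 1: prev_channel_att, 2: prev_spatial_att} between stages (see `out = self.cbam({0: out, 1: x[1], 2: x[2]})` in the first traceback above), so `x[0]` is meant to select the feature map. If the first block instead receives a plain 4-D tensor of shape [1, 64, 96, 32], as produced by compute_model_complexity with batch size 1, then `x[0]` indexes the batch dimension and returns a 3-D tensor of shape [64, 96, 32], which is exactly what Conv2d complains about. A minimal sketch:

    import torch
    import torch.nn as nn

    conv1 = nn.Conv2d(64, 64, kernel_size=1)  # same weight shape as in the log: [64, 64, 1, 1]

    # A plain 4-D tensor, as produced by compute_model_complexity with batch size 1.
    x = torch.randn(1, 64, 96, 32)

    # Indexing a plain tensor with x[0] drops the batch dimension:
    print(x[0].shape)  # torch.Size([64, 96, 32]) -- the 3-D size reported in the error
    # Feeding this to conv1 raises the "Expected 4-dimensional input" error on the
    # PyTorch version in the log (newer versions may instead treat it as an unbatched image).

    # If the block receives the dict-style input that the DCA modules appear to pass
    # between stages (an assumption based on the first traceback in this thread),
    # x[0] selects the full feature map and keeps its batch dimension:
    x_dict = {0: x, 1: None, 2: None}
    out = conv1(x_dict[0])
    print(out.shape)   # torch.Size([1, 64, 96, 32])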

yuwanting828 commented 1 year ago

Hi, sorry to bother you, I have a problem with dimensions too! Why does conv1(x[0]) receive a 3-dimensional input when the input to the network is clearly 4-dimensional? After which operation does this happen?