Open lqili opened 3 years ago
I noticed that you are using the CBAM module but running it inside the original dca_resnet.py model. If so, you can directly run cbam_resnet_dca.py instead.
For more details, please share the model file and I can debug it. Thanks.
Hi, sorry to bother you. I want to use your code in my network, but there is a problem with the dimensions. Can you give me some advice?
Traceback (most recent call last):
  File "main.py", line 43, in <module>
Hi @975624756, from the error log:

File "/home/lan/Multi_gesture/LightMBN-master/model/DCAN2.py", line 131, in forward
    out = self.conv1(x[0])
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 64, 1, 1], but got 3-dimensional input of size [64, 96, 32] instead
You may need to double-check this line.
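A minimal way to reproduce and fix this, assuming the missing dimension is the batch dimension (the [64, 96, 32] tensor would then be channels × height × width), is to add it back with unsqueeze:

```python
import torch
import torch.nn as nn

# conv1's weight shape [64, 64, 1, 1] means a 1x1 Conv2d expecting
# 4-D input of shape [batch, 64, H, W].
conv1 = nn.Conv2d(64, 64, kernel_size=1, bias=False)

x = torch.randn(64, 96, 32)   # the 3-D tensor from the error log
x4d = x.unsqueeze(0)          # add a leading batch dim -> [1, 64, 96, 32]
out = conv1(x4d)
print(out.shape)              # torch.Size([1, 64, 96, 32])
```

If instead the first dimension of [64, 96, 32] is the batch, then the channel dimension is what is missing, and the real bug is upstream of conv1.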
Hi, sorry to bother you, I have a problem with dimensions too! Why does conv1(x[0]) receive a 3-dimensional tensor when the input is clearly 4-dimensional? After what operation does it lose a dimension?
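One common way a 4-dimensional input turns 3-dimensional here is plain tensor indexing: DCAN2.py's forward calls self.conv1(x[0]), and if x arrives as an ordinary tensor instead of the dict of tensors the DCA blocks pass around (e.g. {0: out, 1: x[1], 2: x[2]} elsewhere in this thread), then x[0] selects the first sample and drops the batch dimension. A sketch of the difference:

```python
import torch

x = torch.randn(8, 64, 96, 32)   # a normal 4-D batch
print(x[0].shape)                # torch.Size([64, 96, 32]) -- batch dim is gone

# With the dict the model expects, indexing by key keeps all 4 dims:
d = {0: x}
print(d[0].shape)                # torch.Size([8, 64, 96, 32])
```

So the fix is usually to check what the caller passes into the block, not the conv itself.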
Hello, author. Thank you for your work. I want to run your network in my code, but there is a problem; I checked it but do not know what the cause is. Can you give me some suggestions?
Ubuntu 16.04; PyTorch 0.4
Traceback (most recent call last):
  File "/media/image1903/Linux/liang/Video-Person-ReID-master/main_video_person_reid.py", line 360, in <module>
    main()
  File "/media/image1903/Linux/liang/Video-Person-ReID-master/main_video_person_reid.py", line 231, in main
    train(model, criterion_xent, criterion_htri, optimizer, trainloader, use_gpu)
  File "/media/image1903/Linux/liang/Video-Person-ReID-master/main_video_person_reid.py", line 265, in train
    outputs, features = model(imgs)
  File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 121, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/image1903/Linux/liang/Video-Person-ReID-master/models/dca_resnet.py", line 317, in forward
    x = self.layer4(x)
  File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/container.py", line 91, in forward
    input = module(input)
  File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/image1903/Linux/liang/Video-Person-ReID-master/models/dca_resnet.py", line 227, in forward
    out = self.cbam({0: out, 1: x[1], 2: x[2]})
  File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/image1903/Linux/liang/Video-Person-ReID-master/models/dca_resnet.py", line 147, in forward
    x_out = self.SpatialGate({0: x_out[0], 1: x[2]})
  File "/home/image1903/anaconda3/envs/learn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/image1903/Linux/liang/Video-Person-ReID-master/models/dca_resnet.py", line 129, in forward
    x_compress = self.bnrelu(self.p1*self.compress(x[0]) + self.p2*self.compress(pre_spatial_att))
RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 3
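The two terms being added at dca_resnet.py line 129 have different spatial widths (4 vs. 3 at dimension 3), so they cannot broadcast. This usually means pre_spatial_att was produced at a different spatial resolution than the current stage. A hedged sketch of one possible fix (the shapes below are illustrative, not the actual ones): resize the earlier attention map with F.interpolate before the weighted sum:

```python
import torch
import torch.nn.functional as F

compressed = torch.randn(1, 1, 8, 4)   # self.compress(x[0]) at the current scale
pre_att = torch.randn(1, 1, 6, 3)      # pre_spatial_att from an earlier stage

# compressed + pre_att would fail: widths 4 vs 3 at dim 3 cannot broadcast.
# Resizing the earlier attention map to the current spatial size fixes it:
pre_att = F.interpolate(pre_att, size=compressed.shape[2:],
                        mode='bilinear', align_corners=False)
out = compressed + pre_att
print(out.shape)                       # torch.Size([1, 1, 8, 4])
```

Whether interpolation is the right fix depends on why the resolutions diverge in your pipeline (e.g. a different seq_len or input size in Video-Person-ReID), so it is worth checking the shapes fed into layer4 first.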