midasklr / yolov5prune


Dimension mismatch encountered during pruning #45

Open · RogerHuangPKX opened this issue 3 years ago

RogerHuangPKX commented 3 years ago

Hello, after running sparse training as described in the README, running prune.py fails with: `RuntimeError: Given groups=1, weight of size [115, 128, 1, 1], expected input[1, 111, 80, 80] to have 128 channels, but got 111 channels instead`.

I looked into it and it seems like it might be an image-reading problem? But I haven't seen anyone else file this issue, so I'd like to ask if you could help me figure it out.

| layer name | origin channels | remaining channels |
| --- | --- | --- |
| model.0.conv.bn | 32 | 32 |
| model.1.bn | 64 | 64 |
| model.2.cv1.bn | 32 | 32 |
| model.2.cv2.bn | 32 | 32 |
| model.2.cv3.bn | 64 | 64 |
| model.2.m.0.cv1.bn | 32 | 32 |
| model.2.m.0.cv2.bn | 32 | 32 |
| model.3.bn | 128 | 121 |
| model.4.cv1.bn | 64 | 64 |
| model.4.cv2.bn | 64 | 47 |
| model.4.cv3.bn | 128 | 115 |
| model.4.m.0.cv1.bn | 64 | 64 |
| model.4.m.0.cv2.bn | 64 | 64 |
| model.4.m.1.cv1.bn | 64 | 64 |
| model.4.m.1.cv2.bn | 64 | 64 |
| model.4.m.2.cv1.bn | 64 | 64 |
| model.4.m.2.cv2.bn | 64 | 64 |
| model.5.bn | 256 | 217 |
| model.6.cv1.bn | 128 | 128 |
| model.6.cv2.bn | 128 | 52 |
| model.6.cv3.bn | 256 | 208 |
| model.6.m.0.cv1.bn | 128 | 128 |
| model.6.m.0.cv2.bn | 128 | 128 |
| model.6.m.1.cv1.bn | 128 | 128 |
| model.6.m.1.cv2.bn | 128 | 128 |
| model.6.m.2.cv1.bn | 128 | 128 |
| model.6.m.2.cv2.bn | 128 | 128 |
| model.7.bn | 512 | 446 |
| model.8.cv1.bn | 256 | 181 |
| model.8.cv2.bn | 512 | 254 |
| model.9.cv1.bn | 256 | 60 |
| model.9.cv2.bn | 256 | 83 |
| model.9.cv3.bn | 512 | 148 |
| model.9.m.0.cv1.bn | 256 | 57 |
| model.9.m.0.cv2.bn | 256 | 96 |
| model.10.bn | 256 | 111 |
| model.13.cv1.bn | 128 | 60 |
| model.13.cv2.bn | 128 | 92 |
| model.13.cv3.bn | 256 | 153 |
| model.13.m.0.cv1.bn | 128 | 63 |
| model.13.m.0.cv2.bn | 128 | 87 |
| model.14.bn | 128 | 98 |
| model.17.cv1.bn | 64 | 42 |
| model.17.cv2.bn | 64 | 61 |
| model.17.cv3.bn | 128 | 109 |
| model.17.m.0.cv1.bn | 64 | 36 |
| model.17.m.0.cv2.bn | 64 | 44 |
| model.18.bn | 128 | 51 |
| model.20.cv1.bn | 128 | 35 |
| model.20.cv2.bn | 128 | 61 |
| model.20.cv3.bn | 256 | 94 |
| model.20.m.0.cv1.bn | 128 | 45 |
| model.20.m.0.cv2.bn | 128 | 64 |
| model.21.bn | 256 | 106 |
| model.23.cv1.bn | 256 | 75 |
| model.23.cv2.bn | 256 | 44 |
| model.23.cv3.bn | 512 | 168 |
| model.23.m.0.cv1.bn | 256 | 57 |
| model.23.m.0.cv2.bn | 256 | 63 |
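For cross-reference: the numbers in this table line up with the error message. A minimal sanity check, assuming `C3Pruned.cv3` consumes the concatenation of the `cv1` and `cv2` outputs (which is what the forward call in the traceback below shows):

```python
# Channel arithmetic for model.4, taken from the pruning table above.
cv1_out = 64    # model.4.cv1.bn remaining channels
cv2_out = 47    # model.4.cv2.bn remaining channels
cv3_out = 115   # model.4.cv3.bn remaining channels

# cv3 receives torch.cat((m(cv1(x)), cv2(x)), dim=1), so its input should be:
print(cv1_out + cv2_out)  # 111 -> the "got 111 channels" in the error
# The failing weight is [115, 128, 1, 1]: out_channels 115 matches cv3_out,
# but in_channels is still the unpruned 128 rather than the pruned 111.
```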

```
                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Focus                     [3, 32, 3]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.pruned_common.C3Pruned           [64, 32, 32, 64, [[32, 32, 32]], 1, 128]
  3                -1  1     69938  models.common.Conv                      [64, 121, 3, 2]
  4                -1  1    150296  models.pruned_common.C3Pruned           [121, 64, 47, 115, [[64, 64, 64], [64, 64, 64], [64, 64, 64]], 3, 256]
  5                -1  1    225029  models.common.Conv                      [115, 217, 3, 2]
  6                -1  1    570332  models.pruned_common.C3Pruned           [217, 128, 52, 208, [[128, 128, 128], [128, 128, 128], [128, 128, 128]], 3, 512]
  7                -1  1    835804  models.common.Conv                      [208, 446, 3, 2]
  8                -1  1    265492  models.pruned_common.SPPPruned          [446, 181, 254, [5, 9, 13]]
  9                -1  1    116370  models.pruned_common.C3Pruned           [254, 60, 83, 148, [[60, 57, 96]], 1, False]
 10                -1  1     16650  models.common.Conv                      [148, 111, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  1    129894  models.pruned_common.C3Pruned           [319, 60, 92, 153, [[60, 63, 87]], 1, False]
 14                -1  1     15190  models.common.Conv                      [153, 98, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  1     49736  models.pruned_common.C3Pruned           [213, 42, 61, 109, [[42, 36, 44]], 1, False]
 18                -1  1     50133  models.common.Conv                      [109, 51, 3, 2]
 19          [-1, 14]  1         0  models.common.Concat                    [1]
 20                -1  1     54147  models.pruned_common.C3Pruned           [149, 35, 61, 94, [[35, 45, 64]], 1, False]
 21                -1  1     89888  models.common.Conv                      [94, 106, 3, 2]
 22          [-1, 10]  1         0  models.common.Concat                    [1]
 23                -1  1     81207  models.pruned_common.C3Pruned           [217, 75, 44, 168, [[75, 57, 63]], 1, False]
detect input : ['model.0.conv.bn', 'model.1.bn', 'model.2.cv3.bn', 'model.3.bn', 'model.4.cv3.bn', 'model.5.bn', 'model.6.cv3.bn', 'model.7.bn', 'model.8.cv2.bn', 'model.9.cv3.bn', 'model.10.bn', 'model.10.bn', ['model.10.bn', 'model.6.cv3.bn'], 'model.13.cv3.bn', 'model.14.bn', 'model.14.bn', ['model.14.bn', 'model.4.cv3.bn'], 'model.17.cv3.bn', 'model.18.bn', ['model.18.bn', 'model.14.bn'], 'model.20.cv3.bn', 'model.21.bn', ['model.21.bn', 'model.10.bn'], 'model.23.cv3.bn']
 24      [17, 20, 23]  1     10098  models.yolo.Detect                      [4, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [109, 94, 168]]
Model Summary: 283 layers, 2771100 parameters, 2771100 gradients
```
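For readers unfamiliar with the block layout: the forward at models/pruned_common.py line 39 in the traceback below is the standard C3 structure. A minimal sketch of such a pruned block, where the class name, argument order, and argument meanings are assumptions inferred from the printed arguments above (e.g. `[121, 64, 47, 115, ...]` for layer 4; the real `C3Pruned` also builds a bottleneck stack from the per-bottleneck channel lists):

```python
import torch
import torch.nn as nn

class C3PrunedSketch(nn.Module):
    # Hypothetical reconstruction of a pruned C3 block, not the repo's code.
    def __init__(self, c_in, cv1_out, cv2_out, cv3_out):
        super().__init__()
        self.cv1 = nn.Conv2d(c_in, cv1_out, 1, bias=False)
        self.cv2 = nn.Conv2d(c_in, cv2_out, 1, bias=False)
        # cv3 must take cv1_out + cv2_out input channels; the crash suggests
        # the real model kept the unpruned 128 here instead of 64 + 47 = 111.
        self.cv3 = nn.Conv2d(cv1_out + cv2_out, cv3_out, 1, bias=False)
        self.m = nn.Identity()  # bottleneck stack omitted for brevity

    def forward(self, x):
        # Same structure as models/pruned_common.py line 39 in the traceback.
        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))

blk = C3PrunedSketch(121, 64, 47, 115)           # layer 4's printed channels
print(blk(torch.zeros(1, 121, 80, 80)).shape)    # torch.Size([1, 115, 80, 80])
```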

```
Traceback (most recent call last):
  File "prune.py", line 809, in <module>
    opt=opt
  File "prune.py", line 548, in test_prune
    model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters())))  # run once
  File "/home/ubuntu/anaconda3/envs/mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/yolov5_prune/models/yolo.py", line 277, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/ubuntu/yolov5_prune/models/yolo.py", line 308, in forward_once
    x = m(x)  # run
  File "/home/ubuntu/anaconda3/envs/mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/yolov5_prune/models/pruned_common.py", line 39, in forward
    return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
  File "/home/ubuntu/anaconda3/envs/mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/yolov5_prune/models/common.py", line 42, in forward
    return self.act(self.bn(self.conv(x)))
  File "/home/ubuntu/anaconda3/envs/mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/mmlab/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 443, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/ubuntu/anaconda3/envs/mmlab/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 440, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [115, 128, 1, 1], expected input[1, 111, 80, 80] to have 128 channels, but got 111 channels instead
```
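Not a fix, but a generic way to pinpoint which conv is inconsistent without reading the whole traceback: register forward pre-hooks on every `Conv2d` and compare each layer's declared `in_channels` with the channels that actually arrive. This is a debugging sketch, not part of this repo; `model`, `imgsz`, and `device` stand for the pruned model and settings loaded in prune.py:

```python
import torch
import torch.nn as nn

def report_channel_mismatches(model, imgsz=640, device='cpu'):
    """Print every Conv2d whose declared in_channels differs from the
    channels actually fed to it during a dummy forward pass."""
    handles = []

    def make_hook(name, conv):
        def hook(module, inputs):
            c = inputs[0].shape[1]
            if c != conv.in_channels:
                print(f'{name}: expects {conv.in_channels} channels, got {c}')
        return hook

    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d):
            handles.append(m.register_forward_pre_hook(make_hook(name, m)))
    try:
        # Same dummy input prune.py uses for its "run once" check.
        x = torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))
        model(x)
    except RuntimeError as e:
        print('forward stopped:', e)
    finally:
        for h in handles:
            h.remove()
```

For the model above this should print something like `model.4.cv3.conv: expects 128 channels, got 111` just before the forward aborts, confirming which pruned block was rebuilt with the wrong input width.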

qhzzzz commented 2 years ago

Has this been solved? I ran into the same problem!

hahaqiu123 commented 1 year ago

Hoping for a solution as well.

Zzzfar commented 1 year ago

Did you manage to solve it?