hpc203 / yolov5-dnn-cpp-python-v2

YOLOv5 object detection with OpenCV's dnn module; includes both C++ and Python programs, optimized.
114 stars · 30 forks

Error when converting yolov5n #12

Closed · JiaPai12138 closed this issue 2 years ago

JiaPai12138 commented 2 years ago

Hello: In this fork, following the main_export_onnx.py file from your other repository https://github.com/hpc203/yolov5-face-landmarks-opencv-v2, I adjusted some parameters and updated part of the code in common.py. With that, I successfully converted and loaded yolov5s.onnx; in practice, pure inference averages 10 ms on a 1660 Ti Max-Q, or 12 ms while also watching a video in a browser.

However, when converting yolov5n, the following error occurred:

```
RuntimeError: Given groups=1, weight of size [32, 128, 1, 1], expected input[1, 256, 26, 26] to have 128 channels, but got 256 channels instead
```

Partial traceback:

```
...
File "F:\Downloads\yolov5-dnn-cpp-python-v2-main\convert-onnx\common.py", line 41, in forward
    return self.act(self.bn(self.conv(x)))
...
File "F:\Downloads\yolov5-dnn-cpp-python-v2-main\convert-onnx\convert_onnx.py", line 107, in
    torch.onnx.export(onnx_model, inputs, output_onnx, verbose=False, opset_version=12, input_names=['images'], output_names=['out'])
...
File "F:\Downloads\yolov5-dnn-cpp-python-v2-main\convert-onnx\yolov5n.py", line 38, in forward
    x = self.seq13_C3(x)
...
File "F:\Downloads\yolov5-dnn-cpp-python-v2-main\convert-onnx\common.py", line 248, in forward
    return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
```
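For what it's worth, the mismatch can be reproduced in isolation: a 1×1 conv whose weight has the checkpoint's shape [32, 128, 1, 1] expects a 128-channel input, so feeding it the 256-channel feature map that the hand-built graph produces raises exactly this error. A minimal sketch (the spatial size 26×26 is taken from the error message above):

```python
import torch
import torch.nn as nn

# Conv2d(128 -> 32, 1x1) has weight shape [32, 128, 1, 1], matching the
# checkpoint tensor reported in the error above
conv = nn.Conv2d(in_channels=128, out_channels=32, kernel_size=1)

# ...but the surrounding network hands it a 256-channel feature map
x = torch.randn(1, 256, 26, 26)
try:
    conv(x)
except RuntimeError as e:
    print(e)  # same "expected input ... to have 128 channels" message
```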

The remaining frames are all inside the torch package.

And some of the runtime prints:

```
Namespace(net_type='yolov5n', num_classes=2)
349 348
13.cv1.conv.weight backbone_head.seq13_C3.cv1.conv.weight torch.Size([64, 256, 1, 1]) torch.Size([32, 128, 1, 1])
13.cv1.bn.weight backbone_head.seq13_C3.cv1.bn.weight torch.Size([64]) torch.Size([32])
13.cv1.bn.bias backbone_head.seq13_C3.cv1.bn.bias torch.Size([64]) torch.Size([32])
13.cv1.bn.running_mean backbone_head.seq13_C3.cv1.bn.running_mean torch.Size([64]) torch.Size([32])
13.cv1.bn.running_var backbone_head.seq13_C3.cv1.bn.running_var torch.Size([64]) torch.Size([32])
13.cv2.conv.weight backbone_head.seq13_C3.cv2.conv.weight torch.Size([64, 256, 1, 1]) torch.Size([32, 128, 1, 1])
13.cv2.bn.weight backbone_head.seq13_C3.cv2.bn.weight torch.Size([64]) torch.Size([32])
13.cv2.bn.bias backbone_head.seq13_C3.cv2.bn.bias torch.Size([64]) torch.Size([32])
13.cv2.bn.running_mean backbone_head.seq13_C3.cv2.bn.running_mean torch.Size([64]) torch.Size([32])
13.cv2.bn.running_var backbone_head.seq13_C3.cv2.bn.running_var torch.Size([64]) torch.Size([32])
13.cv3.conv.weight backbone_head.seq13_C3.cv3.conv.weight torch.Size([128, 128, 1, 1]) torch.Size([64, 64, 1, 1])
13.cv3.bn.weight backbone_head.seq13_C3.cv3.bn.weight torch.Size([128]) torch.Size([64])
13.cv3.bn.bias backbone_head.seq13_C3.cv3.bn.bias torch.Size([128]) torch.Size([64])
13.cv3.bn.running_mean backbone_head.seq13_C3.cv3.bn.running_mean torch.Size([128]) torch.Size([64])
13.cv3.bn.running_var backbone_head.seq13_C3.cv3.bn.running_var torch.Size([128]) torch.Size([64])
13.m.0.cv1.conv.weight backbone_head.seq13_C3.m.0.cv1.conv.weight torch.Size([64, 64, 1, 1]) torch.Size([32, 32, 1, 1])
13.m.0.cv1.bn.weight backbone_head.seq13_C3.m.0.cv1.bn.weight torch.Size([64]) torch.Size([32])
13.m.0.cv1.bn.bias backbone_head.seq13_C3.m.0.cv1.bn.bias torch.Size([64]) torch.Size([32])
13.m.0.cv1.bn.running_mean backbone_head.seq13_C3.m.0.cv1.bn.running_mean torch.Size([64]) torch.Size([32])
13.m.0.cv1.bn.running_var backbone_head.seq13_C3.m.0.cv1.bn.running_var torch.Size([64]) torch.Size([32])
13.m.0.cv2.conv.weight backbone_head.seq13_C3.m.0.cv2.conv.weight torch.Size([64, 64, 3, 3]) torch.Size([32, 32, 3, 3])
13.m.0.cv2.bn.weight backbone_head.seq13_C3.m.0.cv2.bn.weight torch.Size([64]) torch.Size([32])
13.m.0.cv2.bn.bias backbone_head.seq13_C3.m.0.cv2.bn.bias torch.Size([64]) torch.Size([32])
13.m.0.cv2.bn.running_mean backbone_head.seq13_C3.m.0.cv2.bn.running_mean torch.Size([64]) torch.Size([32])
13.m.0.cv2.bn.running_var backbone_head.seq13_C3.m.0.cv2.bn.running_var torch.Size([64]) torch.Size([32])
anchor
24.m.0.weight yolo_layers.m.0.bias torch.Size([255, 64, 1, 1]) torch.Size([21])
24.m.0.bias yolo_layers.m.1.weight torch.Size([255]) torch.Size([21, 128, 1, 1])
24.m.1.weight yolo_layers.m.1.bias torch.Size([255, 128, 1, 1]) torch.Size([21])
24.m.1.bias yolo_layers.m.2.weight torch.Size([255]) torch.Size([21, 256, 1, 1])
24.m.2.weight yolo_layers.m.2.bias torch.Size([255, 256, 1, 1]) torch.Size([21])
349 348
13.cv1.conv.weight backbone.seq13_C3.cv1.conv.weight torch.Size([64, 256, 1, 1]) torch.Size([32, 128, 1, 1])
13.cv1.bn.weight backbone.seq13_C3.cv1.bn.weight torch.Size([64]) torch.Size([32])
13.cv1.bn.bias backbone.seq13_C3.cv1.bn.bias torch.Size([64]) torch.Size([32])
13.cv1.bn.running_mean backbone.seq13_C3.cv1.bn.running_mean torch.Size([64]) torch.Size([32])
13.cv1.bn.running_var backbone.seq13_C3.cv1.bn.running_var torch.Size([64]) torch.Size([32])
13.cv2.conv.weight backbone.seq13_C3.cv2.conv.weight torch.Size([64, 256, 1, 1]) torch.Size([32, 128, 1, 1])
13.cv2.bn.weight backbone.seq13_C3.cv2.bn.weight torch.Size([64]) torch.Size([32])
13.cv2.bn.bias backbone.seq13_C3.cv2.bn.bias torch.Size([64]) torch.Size([32])
13.cv2.bn.running_mean backbone.seq13_C3.cv2.bn.running_mean torch.Size([64]) torch.Size([32])
13.cv2.bn.running_var backbone.seq13_C3.cv2.bn.running_var torch.Size([64]) torch.Size([32])
13.cv3.conv.weight backbone.seq13_C3.cv3.conv.weight torch.Size([128, 128, 1, 1]) torch.Size([64, 64, 1, 1])
13.cv3.bn.weight backbone.seq13_C3.cv3.bn.weight torch.Size([128]) torch.Size([64])
13.cv3.bn.bias backbone.seq13_C3.cv3.bn.bias torch.Size([128]) torch.Size([64])
13.cv3.bn.running_mean backbone.seq13_C3.cv3.bn.running_mean torch.Size([128]) torch.Size([64])
13.cv3.bn.running_var backbone.seq13_C3.cv3.bn.running_var torch.Size([128]) torch.Size([64])
13.m.0.cv1.conv.weight backbone.seq13_C3.m.0.cv1.conv.weight torch.Size([64, 64, 1, 1]) torch.Size([32, 32, 1, 1])
13.m.0.cv1.bn.weight backbone.seq13_C3.m.0.cv1.bn.weight torch.Size([64]) torch.Size([32])
13.m.0.cv1.bn.bias backbone.seq13_C3.m.0.cv1.bn.bias torch.Size([64]) torch.Size([32])
13.m.0.cv1.bn.running_mean backbone.seq13_C3.m.0.cv1.bn.running_mean torch.Size([64]) torch.Size([32])
13.m.0.cv1.bn.running_var backbone.seq13_C3.m.0.cv1.bn.running_var torch.Size([64]) torch.Size([32])
13.m.0.cv2.conv.weight backbone.seq13_C3.m.0.cv2.conv.weight torch.Size([64, 64, 3, 3]) torch.Size([32, 32, 3, 3])
13.m.0.cv2.bn.weight backbone.seq13_C3.m.0.cv2.bn.weight torch.Size([64]) torch.Size([32])
13.m.0.cv2.bn.bias backbone.seq13_C3.m.0.cv2.bn.bias torch.Size([64]) torch.Size([32])
13.m.0.cv2.bn.running_mean backbone.seq13_C3.m.0.cv2.bn.running_mean torch.Size([64]) torch.Size([32])
13.m.0.cv2.bn.running_var backbone.seq13_C3.m.0.cv2.bn.running_var torch.Size([64]) torch.Size([32])
anchor
24.m.0.weight m0.bias torch.Size([255, 64, 1, 1]) torch.Size([21])
24.m.0.bias m1.weight torch.Size([255]) torch.Size([21, 128, 1, 1])
24.m.1.weight m1.bias torch.Size([255, 128, 1, 1]) torch.Size([21])
24.m.1.bias m2.weight torch.Size([255]) torch.Size([21, 256, 1, 1])
24.m.2.weight m2.bias torch.Size([255, 256, 1, 1]) torch.Size([21])
```

How should I adjust the code or the network parameters to avoid this error?
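One likely source of the halved shapes: the official YOLOv5 v6.0 yamls give yolov5s a width_multiple of 0.50 but yolov5n only 0.25, so every channel count in a hand-written convert script sized for yolov5s would need to shrink accordingly. A quick sketch of the scaling rule (the make_divisible rounding follows the upstream YOLOv5 convention; treating the yaml channel lists below as representative is an assumption, not the repo's actual code):

```python
import math

def make_divisible(x, divisor=8):
    # YOLOv5 rounds scaled channel counts up to a multiple of 8
    return math.ceil(x / divisor) * divisor

def scaled(c_full, width_multiple):
    # channel counts in the model yaml are full-scale; each variant
    # multiplies them by its width_multiple before rounding
    return make_divisible(c_full * width_multiple)

# width_multiple per the official v6.0 yamls: yolov5s=0.50, yolov5n=0.25
for name, gw in [("yolov5s", 0.50), ("yolov5n", 0.25)]:
    print(name, [scaled(c, gw) for c in (64, 128, 256, 512)])
# yolov5s [32, 64, 128, 256]
# yolov5n [16, 32, 64, 128]
```

This matches the prints above: every yolov5s-sized tensor (e.g. [64, 256, 1, 1]) is exactly double its yolov5n counterpart ([32, 128, 1, 1]).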

JiaPai12138 commented 2 years ago

PS: both my yolov5s and yolov5n were trained under v6.0.