Tianxiaomo / pytorch-YOLOv4

PyTorch, ONNX and TensorRT implementation of YOLOv4
Apache License 2.0

Hi author: demo_darknet2onnx fails after changing output_names, but demo_pytorch2onnx works with the same change #497

Closed Nabyssache closed 2 years ago

Nabyssache commented 2 years ago

In demo_pytorch2onnx.py I changed the `transform_to_onnx` function as follows, replacing the original `output_names` of `['boxes', 'confs']` with different names:

```python
def transform_to_onnx(weight_file, batch_size, n_classes, IN_IMAGE_H, IN_IMAGE_W):

    model = Yolov4(n_classes=n_classes, inference=False)

    pretrained_dict = torch.load(weight_file, map_location=torch.device('cpu'))
    model.load_state_dict(pretrained_dict, strict=False)

    input_names = ["input"]
    # output_names = ['boxes', 'confs']
    output_names = ['feature_map_1', 'feature_map_2', 'feature_map_3']  # changed to 3 feature maps
```
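For reference, these names only matter at the point where `torch.onnx.export` is called further down in the same function. The sketch below continues from the variables defined above and uses the standard export API; the dummy-input shape, file name and opset version are illustrative assumptions, not necessarily the script's exact values:

```python
# Minimal sketch of the export call that consumes input_names/output_names above.
# File name and opset here are assumptions for illustration.
x = torch.randn((batch_size, 3, IN_IMAGE_H, IN_IMAGE_W), requires_grad=True)
onnx_file_name = "yolov4_{}_3_{}_{}.onnx".format(batch_size, IN_IMAGE_H, IN_IMAGE_W)

torch.onnx.export(model,
                  x,
                  onnx_file_name,
                  export_params=True,
                  opset_version=11,
                  do_constant_folding=True,
                  input_names=input_names,
                  output_names=output_names,  # one name per tensor returned by model.forward()
                  dynamic_axes=None)
```

The key constraint is the comment on `output_names`: the list must have exactly as many entries as the model's forward pass returns tensors.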

This exports successfully. But when I change `output_names` the same way in demo_darknet2onnx.py, the export fails. How can I fix it?

```python
def transform_to_onnx(cfgfile, weightfile, batch_size=1, onnx_file_name=None):
    model = Darknet(cfgfile)

    model.print_network()
    model.load_weights(weightfile)
    print('Loading weights from %s... Done!' % (weightfile))

    dynamic = False
    if batch_size <= 0:
        dynamic = True

    input_names = ["input"]
    output_names = ['feature_map_1', 'feature_map_2', 'feature_map_3']
```
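For context, the `dynamic` flag set above is presumably turned into a `dynamic_axes` argument for the export later in the function; a hedged sketch of the usual pattern (the `"batch_size"` axis label is an illustrative assumption, not the repo's exact code):

```python
# Hedged sketch: how a `dynamic` flag is commonly mapped to torch.onnx.export's
# dynamic_axes argument. With batch_size=1, dynamic stays False and None is passed,
# which matches the dynamic_axes=None seen in the traceback below.
dynamic_axes = None
if dynamic:
    # mark dimension 0 (the batch) of every named input/output as variable-sized
    dynamic_axes = {name: {0: "batch_size"} for name in input_names + output_names}
```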

```
root@883803c4d07e:/workspace/pytorch-YOLOv4# python3.7 demo_darknet2onnx.py 12-23/yolov4-aqm_gz.cfg 12-23/aqmgz.names 12-23/yolov4-shenran_aqm_gz_2000.weights 12-23/testwcgz.jpg 1
Converting to onnx and running demo ...
layer filters size input output
0 conv 32 3 x 3 / 1 608 x 608 x 3 -> 608 x 608 x 32
1 conv 64 3 x 3 / 2 608 x 608 x 32 -> 304 x 304 x 64
2 conv 64 1 x 1 / 1 304 x 304 x 64 -> 304 x 304 x 64
3 route 1
4 conv 64 1 x 1 / 1 304 x 304 x 64 -> 304 x 304 x 64
5 conv 32 1 x 1 / 1 304 x 304 x 64 -> 304 x 304 x 32
6 conv 64 3 x 3 / 1 304 x 304 x 32 -> 304 x 304 x 64
7 shortcut 4
8 conv 64 1 x 1 / 1 304 x 304 x 64 -> 304 x 304 x 64
9 route 8 2
10 conv 64 1 x 1 / 1 304 x 304 x 128 -> 304 x 304 x 64
11 conv 128 3 x 3 / 2 304 x 304 x 64 -> 152 x 152 x 128
12 conv 64 1 x 1 / 1 152 x 152 x 128 -> 152 x 152 x 64
13 route 11
14 conv 64 1 x 1 / 1 152 x 152 x 128 -> 152 x 152 x 64
15 conv 64 1 x 1 / 1 152 x 152 x 64 -> 152 x 152 x 64
16 conv 64 3 x 3 / 1 152 x 152 x 64 -> 152 x 152 x 64
17 shortcut 14
18 conv 64 1 x 1 / 1 152 x 152 x 64 -> 152 x 152 x 64
19 conv 64 3 x 3 / 1 152 x 152 x 64 -> 152 x 152 x 64
20 shortcut 17
21 conv 64 1 x 1 / 1 152 x 152 x 64 -> 152 x 152 x 64
22 route 21 12
23 conv 128 1 x 1 / 1 152 x 152 x 128 -> 152 x 152 x 128
24 conv 256 3 x 3 / 2 152 x 152 x 128 -> 76 x 76 x 256
25 conv 128 1 x 1 / 1 76 x 76 x 256 -> 76 x 76 x 128
26 route 24
27 conv 128 1 x 1 / 1 76 x 76 x 256 -> 76 x 76 x 128
28 conv 128 1 x 1 / 1 76 x 76 x 128 -> 76 x 76 x 128
29 conv 128 3 x 3 / 1 76 x 76 x 128 -> 76 x 76 x 128
30 shortcut 27
31 conv 128 1 x 1 / 1 76 x 76 x 128 -> 76 x 76 x 128
32 conv 128 3 x 3 / 1 76 x 76 x 128 -> 76 x 76 x 128
33 shortcut 30
34 conv 128 1 x 1 / 1 76 x 76 x 128 -> 76 x 76 x 128
35 conv 128 3 x 3 / 1 76 x 76 x 128 -> 76 x 76 x 128
36 shortcut 33
37 conv 128 1 x 1 / 1 76 x 76 x 128 -> 76 x 76 x 128
38 conv 128 3 x 3 / 1 76 x 76 x 128 -> 76 x 76 x 128
39 shortcut 36
40 conv 128 1 x 1 / 1 76 x 76 x 128 -> 76 x 76 x 128
41 conv 128 3 x 3 / 1 76 x 76 x 128 -> 76 x 76 x 128
42 shortcut 39
43 conv 128 1 x 1 / 1 76 x 76 x 128 -> 76 x 76 x 128
44 conv 128 3 x 3 / 1 76 x 76 x 128 -> 76 x 76 x 128
45 shortcut 42
46 conv 128 1 x 1 / 1 76 x 76 x 128 -> 76 x 76 x 128
47 conv 128 3 x 3 / 1 76 x 76 x 128 -> 76 x 76 x 128
48 shortcut 45
49 conv 128 1 x 1 / 1 76 x 76 x 128 -> 76 x 76 x 128
50 conv 128 3 x 3 / 1 76 x 76 x 128 -> 76 x 76 x 128
51 shortcut 48
52 conv 128 1 x 1 / 1 76 x 76 x 128 -> 76 x 76 x 128
53 route 52 25
54 conv 256 1 x 1 / 1 76 x 76 x 256 -> 76 x 76 x 256
55 conv 512 3 x 3 / 2 76 x 76 x 256 -> 38 x 38 x 512
56 conv 256 1 x 1 / 1 38 x 38 x 512 -> 38 x 38 x 256
57 route 55
58 conv 256 1 x 1 / 1 38 x 38 x 512 -> 38 x 38 x 256
59 conv 256 1 x 1 / 1 38 x 38 x 256 -> 38 x 38 x 256
60 conv 256 3 x 3 / 1 38 x 38 x 256 -> 38 x 38 x 256
61 shortcut 58
62 conv 256 1 x 1 / 1 38 x 38 x 256 -> 38 x 38 x 256
63 conv 256 3 x 3 / 1 38 x 38 x 256 -> 38 x 38 x 256
64 shortcut 61
65 conv 256 1 x 1 / 1 38 x 38 x 256 -> 38 x 38 x 256
66 conv 256 3 x 3 / 1 38 x 38 x 256 -> 38 x 38 x 256
67 shortcut 64
68 conv 256 1 x 1 / 1 38 x 38 x 256 -> 38 x 38 x 256
69 conv 256 3 x 3 / 1 38 x 38 x 256 -> 38 x 38 x 256
70 shortcut 67
71 conv 256 1 x 1 / 1 38 x 38 x 256 -> 38 x 38 x 256
72 conv 256 3 x 3 / 1 38 x 38 x 256 -> 38 x 38 x 256
73 shortcut 70
74 conv 256 1 x 1 / 1 38 x 38 x 256 -> 38 x 38 x 256
75 conv 256 3 x 3 / 1 38 x 38 x 256 -> 38 x 38 x 256
76 shortcut 73
77 conv 256 1 x 1 / 1 38 x 38 x 256 -> 38 x 38 x 256
78 conv 256 3 x 3 / 1 38 x 38 x 256 -> 38 x 38 x 256
79 shortcut 76
80 conv 256 1 x 1 / 1 38 x 38 x 256 -> 38 x 38 x 256
81 conv 256 3 x 3 / 1 38 x 38 x 256 -> 38 x 38 x 256
82 shortcut 79
83 conv 256 1 x 1 / 1 38 x 38 x 256 -> 38 x 38 x 256
84 route 83 56
85 conv 512 1 x 1 / 1 38 x 38 x 512 -> 38 x 38 x 512
86 conv 1024 3 x 3 / 2 38 x 38 x 512 -> 19 x 19 x1024
87 conv 512 1 x 1 / 1 19 x 19 x1024 -> 19 x 19 x 512
88 route 86
89 conv 512 1 x 1 / 1 19 x 19 x1024 -> 19 x 19 x 512
90 conv 512 1 x 1 / 1 19 x 19 x 512 -> 19 x 19 x 512
91 conv 512 3 x 3 / 1 19 x 19 x 512 -> 19 x 19 x 512
92 shortcut 89
93 conv 512 1 x 1 / 1 19 x 19 x 512 -> 19 x 19 x 512
94 conv 512 3 x 3 / 1 19 x 19 x 512 -> 19 x 19 x 512
95 shortcut 92
96 conv 512 1 x 1 / 1 19 x 19 x 512 -> 19 x 19 x 512
97 conv 512 3 x 3 / 1 19 x 19 x 512 -> 19 x 19 x 512
98 shortcut 95
99 conv 512 1 x 1 / 1 19 x 19 x 512 -> 19 x 19 x 512
100 conv 512 3 x 3 / 1 19 x 19 x 512 -> 19 x 19 x 512
101 shortcut 98
102 conv 512 1 x 1 / 1 19 x 19 x 512 -> 19 x 19 x 512
103 route 102 87
104 conv 1024 1 x 1 / 1 19 x 19 x1024 -> 19 x 19 x1024
105 conv 512 1 x 1 / 1 19 x 19 x1024 -> 19 x 19 x 512
106 conv 1024 3 x 3 / 1 19 x 19 x 512 -> 19 x 19 x1024
107 conv 512 1 x 1 / 1 19 x 19 x1024 -> 19 x 19 x 512
108 max 5 x 5 / 1 19 x 19 x 512 -> 19 x 19 x 512
109 route 107
110 max 9 x 9 / 1 19 x 19 x 512 -> 19 x 19 x 512
111 route 107
112 max 13 x 13 / 1 19 x 19 x 512 -> 19 x 19 x 512
113 route 112 110 108 107
114 conv 512 1 x 1 / 1 19 x 19 x2048 -> 19 x 19 x 512
115 conv 1024 3 x 3 / 1 19 x 19 x 512 -> 19 x 19 x1024
116 conv 512 1 x 1 / 1 19 x 19 x1024 -> 19 x 19 x 512
117 conv 256 1 x 1 / 1 19 x 19 x 512 -> 19 x 19 x 256
118 upsample 2 19 x 19 x 256 -> 38 x 38 x 256
119 route 85
120 conv 256 1 x 1 / 1 38 x 38 x 512 -> 38 x 38 x 256
121 route 120 118
122 conv 256 1 x 1 / 1 38 x 38 x 512 -> 38 x 38 x 256
123 conv 512 3 x 3 / 1 38 x 38 x 256 -> 38 x 38 x 512
124 conv 256 1 x 1 / 1 38 x 38 x 512 -> 38 x 38 x 256
125 conv 512 3 x 3 / 1 38 x 38 x 256 -> 38 x 38 x 512
126 conv 256 1 x 1 / 1 38 x 38 x 512 -> 38 x 38 x 256
127 conv 128 1 x 1 / 1 38 x 38 x 256 -> 38 x 38 x 128
128 upsample 2 38 x 38 x 128 -> 76 x 76 x 128
129 route 54
130 conv 128 1 x 1 / 1 76 x 76 x 256 -> 76 x 76 x 128
131 route 130 128
132 conv 128 1 x 1 / 1 76 x 76 x 256 -> 76 x 76 x 128
133 conv 256 3 x 3 / 1 76 x 76 x 128 -> 76 x 76 x 256
134 conv 128 1 x 1 / 1 76 x 76 x 256 -> 76 x 76 x 128
135 conv 256 3 x 3 / 1 76 x 76 x 128 -> 76 x 76 x 256
136 conv 128 1 x 1 / 1 76 x 76 x 256 -> 76 x 76 x 128
137 conv 256 3 x 3 / 1 76 x 76 x 128 -> 76 x 76 x 256
138 conv 27 1 x 1 / 1 76 x 76 x 256 -> 76 x 76 x 27
139 detection
140 route 136
141 conv 256 3 x 3 / 2 76 x 76 x 128 -> 38 x 38 x 256
142 route 141 126
143 conv 256 1 x 1 / 1 38 x 38 x 512 -> 38 x 38 x 256
144 conv 512 3 x 3 / 1 38 x 38 x 256 -> 38 x 38 x 512
145 conv 256 1 x 1 / 1 38 x 38 x 512 -> 38 x 38 x 256
146 conv 512 3 x 3 / 1 38 x 38 x 256 -> 38 x 38 x 512
147 conv 256 1 x 1 / 1 38 x 38 x 512 -> 38 x 38 x 256
148 conv 512 3 x 3 / 1 38 x 38 x 256 -> 38 x 38 x 512
149 conv 27 1 x 1 / 1 38 x 38 x 512 -> 38 x 38 x 27
150 detection
151 route 147
152 conv 512 3 x 3 / 2 38 x 38 x 256 -> 19 x 19 x 512
153 route 152 116
154 conv 512 1 x 1 / 1 19 x 19 x1024 -> 19 x 19 x 512
155 conv 1024 3 x 3 / 1 19 x 19 x 512 -> 19 x 19 x1024
156 conv 512 1 x 1 / 1 19 x 19 x1024 -> 19 x 19 x 512
157 conv 1024 3 x 3 / 1 19 x 19 x 512 -> 19 x 19 x1024
158 conv 512 1 x 1 / 1 19 x 19 x1024 -> 19 x 19 x 512
159 conv 1024 3 x 3 / 1 19 x 19 x 512 -> 19 x 19 x1024
160 conv 27 1 x 1 / 1 19 x 19 x1024 -> 19 x 19 x 27
161 detection
Loading weights from 12-23/yolov4-shenran_aqm_gz_2000.weights... Done!
/opt/conda/lib/python3.7/site-packages/numpy/core/function_base.py:117: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  num = operator.index(num)
/workspace/pytorch-YOLOv4/tool/yolo_layer.py:227: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  bx = bxy[:, ii : ii + 1] + torch.tensor(grid_x, device=device, dtype=torch.float32)  # grid_x.to(device=device, dtype=torch.float32)
/workspace/pytorch-YOLOv4/tool/yolo_layer.py:229: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  by = bxy[:, ii + 1 : ii + 2] + torch.tensor(grid_y, device=device, dtype=torch.float32)  # grid_y.to(device=device, dtype=torch.float32)
Traceback (most recent call last):
  File "demo_darknet2onnx.py", line 64, in <module>
    main(cfg_file, namesfile, weight_file, image_path, batch_size)
  File "demo_darknet2onnx.py", line 19, in main
    transform_to_onnx(cfg_file, weight_file, batch_size)
  File "/workspace/pytorch-YOLOv4/tool/darknet2onnx.py", line 50, in transform_to_onnx
    dynamic_axes=None)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/__init__.py", line 208, in export
    custom_opsets, enable_onnx_checker, use_external_data_format)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 92, in export
    use_external_data_format=use_external_data_format)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 530, in _export
    fixed_batch_size=fixed_batch_size)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 397, in _model_to_graph
    _set_input_and_output_names(graph, input_names, output_names)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 607, in _set_input_and_output_names
    set_names(list(graph.outputs()), output_names, 'output')
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 602, in set_names
    % (descriptor, len(name_list), descriptor, len(node_list)))
RuntimeError: number of output names provided (3) exceeded number of outputs (2)
```
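The RuntimeError at the bottom is the key line: the traced graph exposes only two outputs, so three names cannot be attached to it. A quick way to check how many tensors the darknet wrapper actually returns before exporting is sketched below; the file paths are taken from the command above, and the import path is an assumption about the repo layout:

```python
import torch
from tool.darknet2pytorch import Darknet  # assumption: Darknet wrapper lives in tool/darknet2pytorch.py

# Hedged diagnostic sketch: count how many tensors the wrapper's forward() returns,
# so output_names can be sized to match before torch.onnx.export is called.
model = Darknet("12-23/yolov4-aqm_gz.cfg")
model.load_weights("12-23/yolov4-shenran_aqm_gz_2000.weights")
model.eval()

dummy = torch.randn(1, 3, 608, 608)  # 608x608 input, as shown in the printed network above
with torch.no_grad():
    outputs = model(dummy)

if isinstance(outputs, (list, tuple)):
    print("forward() returns", len(outputs), "tensors:", [tuple(o.shape) for o in outputs])
else:
    print("forward() returns a single tensor:", tuple(outputs.shape))
```

If this reports two tensors, it matches the error message: the darknet wrapper appears to merge the three YOLO heads into (boxes, confs) inside its forward pass, whereas the Yolov4 class used by demo_pytorch2onnx.py with inference=False appears to return the three raw feature maps, which is why three names work there.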

Nabyssache commented 2 years ago

Looking at the darknet-converted ONNX in Netron is very strange: there are no three YOLO heads, only a single output of boxes and confs. The ONNX exported from the modified .pth, however, does show three YOLO heads. After changing `output_names = ['feature_map_1', 'feature_map_2', 'feature_map_3']` (i.e. three feature maps), the darknet-to-ONNX export fails with the error shown above. Why does the same change succeed for pytorch2onnx?
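A hedged way to double-check what Netron shows without the GUI, using the onnx Python package (the file name below is a placeholder for whichever .onnx file was just exported):

```python
import onnx

# List the named graph inputs/outputs of an exported model; this mirrors what
# Netron displays. Replace the placeholder file name with the real one.
m = onnx.load("yolov4_exported.onnx")
print("inputs :", [i.name for i in m.graph.input])
print("outputs:", [o.name for o in m.graph.output])
```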

Nabyssache commented 2 years ago

Got it working: converting the .weights file to .pth first, and then converting the .pth to ONNX, succeeded!
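The comment does not say which tool was used for the .weights-to-.pth step, so the sketch below is only one possible route, assuming the Darknet wrapper in tool/darknet2pytorch.py; file names are placeholders:

```python
import torch
from tool.darknet2pytorch import Darknet  # assumption about the repo layout

# Hypothetical first step: load the darknet cfg/weights and save a .pth state dict.
model = Darknet("12-23/yolov4-aqm_gz.cfg")
model.load_weights("12-23/yolov4-shenran_aqm_gz_2000.weights")
torch.save(model.state_dict(), "yolov4-aqm_gz.pth")  # output file name is an assumption

# Second step: pass the .pth to demo_pytorch2onnx.py (see that script's usage string
# for the exact argument order). Caveat: the saved keys follow the Darknet wrapper's
# naming, and demo_pytorch2onnx.py loads them into the Yolov4 class with strict=False,
# so verify the exported graph (e.g. with Netron or the onnx snippet above) to make
# sure the three feature-map outputs really appear.
```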