talebolano / TensorRT-Scaled-YOLOv4

TensorRT for Scaled YOLOv4 (yolov4-csp.cfg)

Does it support ScaledYOLOv4-large? #1

Closed brucelee78 closed 3 years ago

brucelee78 commented 3 years ago

Can ScaledYOLOv4-large use this repo for forward inference?

talebolano commented 3 years ago

@brucelee78 Yes. script/ScaledYOLOv4-large provides tools for converting to ONNX. Note that the number of anchors in yolov4-large is greater than 3, so you need to change the parameters in config.h and recompile.

brucelee78 commented 3 years ago

> @brucelee78 Yes. script/ScaledYOLOv4-large provides tools for converting to ONNX. Note that the number of anchors in yolov4-large is greater than 3, so you need to change the parameters in config.h and recompile.

I used script/ScaledYOLOv4-large to export to ONNX, but got some errors. First, I copied script/ScaledYOLOv4-large into the author's original ScaledYOLOv4-large repo and ran python3 export.py, which produced the errors below:

from n params module arguments
0 -1 1 1160 models.common.Conv [3, 40, 3, 1]
1 -1 1 28960 models.common.Conv [40, 80, 3, 2]
2 -1 1 30960 models.common.BottleneckCSP [80, 80, 1]
3 -1 1 115520 models.common.Conv [80, 160, 3, 2]
4 -1 1 251360 models.common.BottleneckCSP [160, 160, 3]
5 -1 1 461440 models.common.Conv [160, 320, 3, 2]
6 -1 1 4081600 models.common.BottleneckCSP [320, 320, 15]
7 -1 1 1844480 models.common.Conv [320, 640, 3, 2]
8 -1 1 16304000 models.common.BottleneckCSP [640, 640, 15]
9 -1 1 7375360 models.common.Conv [640, 1280, 3, 2]
10 -1 1 32382720 models.common.BottleneckCSP [1280, 1280, 7]
11 -1 1 14748160 models.common.Conv [1280, 1280, 3, 2]
12 -1 1 32382720 models.common.BottleneckCSP [1280, 1280, 7]
13 -1 1 14748160 models.common.Conv [1280, 1280, 3, 2]
14 -1 1 32382720 models.common.BottleneckCSP [1280, 1280, 7]
15 -1 1 11888640 models.common.SPPCSP [1280, 640, 1]
16 -1 1 410880 models.common.Conv [640, 640, 1, 1]
17 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
18 -6 1 820480 models.common.Conv [1280, 640, 1, 1]
19 [-1, -2] 1 0 models.common.Concat [1]
20 -1 1 14348800 models.common.BottleneckCSP2 [1280, 640, 3]
21 -1 1 410880 models.common.Conv [640, 640, 1, 1]
22 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
23 -13 1 820480 models.common.Conv [1280, 640, 1, 1]
24 [-1, -2] 1 0 models.common.Concat [1]
25 -1 1 14348800 models.common.BottleneckCSP2 [1280, 640, 3]
26 -1 1 205440 models.common.Conv [640, 320, 1, 1]
27 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
28 -20 1 205440 models.common.Conv [640, 320, 1, 1]
29 [-1, -2] 1 0 models.common.Concat [1]
30 -1 1 3590400 models.common.BottleneckCSP2 [640, 320, 3]
31 -1 1 51520 models.common.Conv [320, 160, 1, 1]
32 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
33 -27 1 51520 models.common.Conv [320, 160, 1, 1]
34 [-1, -2] 1 0 models.common.Concat [1]
35 -1 1 899200 models.common.BottleneckCSP2 [320, 160, 3]
36 -1 1 461440 models.common.Conv [160, 320, 3, 1]
37 -2 1 461440 models.common.Conv [160, 320, 3, 2]
38 [-1, 30] 1 0 models.common.Concat [1]
39 -1 1 3590400 models.common.BottleneckCSP2 [640, 320, 3]
40 -1 1 1844480 models.common.Conv [320, 640, 3, 1]
41 -2 1 1844480 models.common.Conv [320, 640, 3, 2]
42 [-1, 25] 1 0 models.common.Concat [1]
43 -1 1 14348800 models.common.BottleneckCSP2 [1280, 640, 3]
44 -1 1 7375360 models.common.Conv [640, 1280, 3, 1]
45 -2 1 3687680 models.common.Conv [640, 640, 3, 2]
46 [-1, 20] 1 0 models.common.Concat [1]
47 -1 1 14348800 models.common.BottleneckCSP2 [1280, 640, 3]
48 -1 1 7375360 models.common.Conv [640, 1280, 3, 1]
49 -2 1 3687680 models.common.Conv [640, 640, 3, 2]
50 [-1, 15] 1 0 models.common.Concat [1]
51 -1 1 14348800 models.common.BottleneckCSP2 [1280, 640, 3]
52 -1 1 7375360 models.common.Conv [640, 1280, 3, 1]
53 [36, 40, 44, 48, 52] 1 1633700 models.yolo.Detect [80, [[13, 17, 22, 25, 27, 66, 55, 41], [57, 88, 112, 69, 69, 177, 136, 138], [136, 138, 287, 114, 134, 275, 268, 248], [268, 248, 232, 504, 445, 416, 640, 640], [812, 393, 477, 808, 1070, 908, 1408, 1408]], [320, 640, 1280, 1280, 1280]]
Model Summary: 722 layers, 2.87576e+08 parameters, 2.87576e+08 gradients

/home/lyc/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py:243: UserWarning: 'add_node_names' can be set to True only when 'operator_export_type' is ONNX. Since 'operator_export_type' is not set to 'ONNX', 'add_node_names' argument will be ignored.
  "{} argument will be ignored.".format(arg_name, arg_name))
/home/lyc/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py:243: UserWarning: 'do_constant_folding' can be set to True only when 'operator_export_type' is ONNX. Since 'operator_export_type' is not set to 'ONNX', 'do_constant_folding' argument will be ignored.
  "{} argument will be ignored.".format(arg_name, arg_name))
Traceback (most recent call last):
  File "export.py", line 46, in <module>
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/onnx/__init__.py", line 208, in export
    custom_opsets, enable_onnx_checker, use_external_data_format)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 92, in export
    use_external_data_format=use_external_data_format)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 530, in _export
    fixed_batch_size=fixed_batch_size)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 366, in _model_to_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 319, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 338, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 426, in forward
    self._force_outplace,
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 412, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 720, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 704, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/lyc/.local/share/Trash/files/ScaledYOLOv4-yolov4-large_test.2/models/yolo.py", line 152, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/lyc/.local/share/Trash/files/ScaledYOLOv4-yolov4-large_test.2/models/yolo.py", line 157, in forward_once
    if m.f != -1:  # if not from previous layer
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 772, in __getattr__
    type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'Upsample' object has no attribute 'f'

What happened? I tried torch 1.6.0 and 1.7.1, with onnx 1.6.0.

talebolano commented 3 years ago

@brucelee78 Sorry, please change export.py in ScaledYOLOv4-large. Change line 38:

    for i,m in enumerate(model.model):
        if isinstance(m,nn.Upsample):
            model.model[i] = nn.Upsample(size=(scaledh,scaledw))
            scaledw *= 2
            scaledh *= 2 

to:

    for index, m in enumerate(model.model):
        if isinstance(m, nn.Upsample):
            # keep the 'from' (f) and layer-index (i) attributes that
            # yolo.py's forward_once reads on every module
            f = m.f
            i = m.i
            model.model[index] = nn.Upsample(size=(scaledh, scaledw))
            model.model[index].f = f
            model.model[index].i = i
            scaledw *= 2
            scaledh *= 2
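
For context, a minimal standalone sketch of why the original replacement failed (based on the `if m.f != -1` line visible in the traceback above; this snippet is illustrative, not part of export.py):

    import torch.nn as nn

    # yolo.py's forward_once reads m.f on every module, but a freshly
    # constructed nn.Upsample has no such attribute
    m = nn.Upsample(size=(40, 40))
    try:
        m.f
    except AttributeError as e:  # torch raises a ModuleAttributeError subclass
        print(e)  # 'Upsample' object has no attribute 'f'
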
brucelee78 commented 3 years ago

> @brucelee78 Sorry, please change export.py in ScaledYOLOv4-large. Change line 38 [...]

OK, that error has disappeared, thanks for your help! But I get another error, like below:

Traceback (most recent call last):
  File "/home/lyc/PycharmProjects/ScaledYOLOv4-yolov4-large_test/export.py", line 56, in <module>
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/onnx/__init__.py", line 208, in export
    custom_opsets, enable_onnx_checker, use_external_data_format)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 92, in export
    use_external_data_format=use_external_data_format)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 530, in _export
    fixed_batch_size=fixed_batch_size)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 366, in _model_to_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 319, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 338, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 426, in forward
    self._force_outplace,
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 412, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 720, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 704, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/lyc/PycharmProjects/ScaledYOLOv4-yolov4-large_test/models/yolo.py", line 109, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/lyc/PycharmProjects/ScaledYOLOv4-yolov4-large_test/models/yolo.py", line 129, in forward_once
    x = m(x)  # run
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 720, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 704, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/nn/modules/upsampling.py", line 141, in forward
    return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners)
  File "/home/lyc/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 3143, in interpolate
    return torch._C._nn.upsample_nearest2d(input, output_size, sfl[0], sfl[1])
TypeError: upsample_nearest2d(): argument 'output_size' must be tuple of ints, but found element of type float at pos 1

I think the error comes from the torch.onnx.export call, but I don't know how to solve it. Please take a look! Thanks!

talebolano commented 3 years ago

@brucelee78 Try changing export.py in ScaledYOLOv4-large. Change line 36:

    scaledh = opt.h/(2**(2+numup))
    scaledw = opt.w/(2**(2+numup))

to:

    scaledh = int(opt.h/(2**(2+numup)))
    scaledw = int(opt.w/(2**(2+numup)))
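
The root cause is that / in Python 3 returns a float even when the division is exact, and upsample_nearest2d requires the elements of output_size to be ints. A small worked example (opt.h, opt.w, and numup come from export.py; the concrete values here are assumptions for illustration):

    # illustrative values: a 1280x1280 input and numup = 3 upsample layers
    h, w, numup = 1280, 1280, 3

    # without int(), 1280 / 32 evaluates to 40.0 (a float), which triggers
    # the "must be tuple of ints" TypeError above
    scaledh = int(h / (2 ** (2 + numup)))
    scaledw = int(w / (2 ** (2 + numup)))
    print(scaledh, scaledw)  # 40 40
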
brucelee78 commented 3 years ago

> @brucelee78 Try changing export.py in ScaledYOLOv4-large. Change line 36 [...]

Thank you! I converted the model to ONNX successfully! Then I used your repo to convert the ONNX model to TensorRT. I'm using the "yolov4-P7" model; following your instructions, I modified config.h as below:

const int max_per_img = 100;
const float vis_thresh=0.4;
const float nms_thresh=0.45;

const int inputsize[2] = {640,640};
const int num_anchors = 4;
const int classes = 80;
/*csp/p5*/
//const int yolo1[2] = {inputsize[0] / 32, inputsize[1] /32};
//const int yolo2[2] = {inputsize[0] / 16, inputsize[1] /16};
//const int yolo3[2] = {inputsize[0] / 8, inputsize[1] /8};
// if the yolo model has 4 outputs (like yolov4-p6), add const int yolo4[2] = {inputsize[0] / 16, inputsize[1] /16};

const int yolo1[2] = {inputsize[0] / 128, inputsize[1] /128};
const int yolo2[2] = {inputsize[0] / 64, inputsize[1] /64};
const int yolo3[2] = {inputsize[0] / 32, inputsize[1] /32};
const int yolo4[2] = {inputsize[0] / 16, inputsize[1] /16};
const int yolo5[2] = {inputsize[0] / 8, inputsize[1] /8};

const int yolo1_num = getArraylen(yolo1);
const int yolo2_num = getArraylen(yolo2);
const int yolo3_num = getArraylen(yolo3);
// if the yolo model has 4 outputs (like yolov4-p6), add const int yolo4_num = getArraylen(yolo4);
const int yolo4_num = getArraylen(yolo4);
const int yolo5_num = getArraylen(yolo5);

const int yolo1_size = std::accumulate(yolo1,yolo1+yolo1_num,1,std::multiplies<int64_t>());
const int yolo2_size = std::accumulate(yolo2,yolo2+yolo2_num,1,std::multiplies<int64_t>());
const int yolo3_size = std::accumulate(yolo3,yolo3+yolo3_num,1,std::multiplies<int64_t>());
// if the yolo model has 4 outputs (like yolov4-p6), add const int yolo4_size = std::accumulate(yolo4,yolo4+yolo4_num,1,std::multiplies<int64_t>());
const int yolo4_size = std::accumulate(yolo4,yolo4+yolo4_num,1,std::multiplies<int64_t>());
const int yolo5_size = std::accumulate(yolo5,yolo5+yolo5_num,1,std::multiplies<int64_t>());

//const int yolo_size = num_anchors*(yolo1_size+yolo2_size+yolo3_size);
// if the yolo model has 4 outputs (like yolov4-p6), change to const int yolo_size = num_anchors*(yolo1_size+yolo2_size+yolo3_size+yolo4_size);
const int yolo_size = num_anchors*(yolo1_size+yolo2_size+yolo3_size+yolo4_size+yolo5_size);

const std::string input_name = "input";
const std::string output_names[3] = {"conf","cls","bbox"};

// if you change the classes, modify them here
const std::string class_names[80] = {"person", "bicycle", "car", "motorcycle", "airplane", "bus",
               "train", "truck", "boat", "traffic_light", "fire_hydrant",
               "stop_sign", "parking_meter", "bench", "bird", "cat", "dog",
               "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe",
               "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
               "skis", "snowboard", "sports_ball", "kite", "baseball_bat",
               "baseball_glove", "skateboard", "surfboard", "tennis_racket",
               "bottle", "wine_glass", "cup", "fork", "knife", "spoon", "bowl",
               "banana", "apple", "sandwich", "orange", "broccoli", "carrot",
               "hot_dog", "pizza", "donut", "cake", "chair", "couch",
               "potted_plant", "bed", "dining_table", "toilet", "tv", "laptop",
               "mouse", "remote", "keyboard", "cell_phone", "microwave",
               "oven", "toaster", "sink", "refrigerator", "book", "clock",
               "vase", "scissors", "teddy_bear", "hair_drier", "toothbrush"};

Please take a look, is this correct? I'm using the author's original yolov4-p7 model.

But during inference the output is not correct, as in the attached result image. I don't know where the problem is.

talebolano commented 3 years ago

@brucelee78 I see... When you generated the ONNX file, did you specify the height and width of the input, e.g. python3 export.py --h 1280 --w 1280? If the input of the ONNX model is 1280x1280, change config.h accordingly: const int inputsize[2] = {1280,1280};
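
To illustrate why the sizes must match, a quick sketch of the grid arithmetic (a sanity check, not repo code; the strides follow the /128 through /8 divisors in the config.h above):

    # per-scale output grid sides for yolov4-p7 (strides 128, 64, 32, 16, 8)
    def grid_sides(side):
        return [side // s for s in (128, 64, 32, 16, 8)]

    print(grid_sides(640))   # [5, 10, 20, 40, 80]   <- what inputsize = {640,640} assumes
    print(grid_sides(1280))  # [10, 20, 40, 80, 160] <- what a 1280x1280 ONNX model produces

If config.h assumes 640x640 grids while the engine actually receives 1280x1280 outputs, every decoded box is read from the wrong offset, which would explain the incorrect detections.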

brucelee78 commented 3 years ago

> @brucelee78 I see... When you generated the ONNX file, did you specify the height and width of the input, e.g. python3 export.py --h 1280 --w 1280? If the input of the ONNX model is 1280x1280, change config.h accordingly: const int inputsize[2] = {1280,1280};

OK! I see, it works great! Thank you! One last question: for the "yolov4-P7" model, should num_anchors in config.h be 4 or 5?

talebolano commented 3 years ago

@brucelee78 num_anchors should be 4 for yolov4-P5, P6 and P7, and 3 for yolov4-csp.
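
This is consistent with the Detect line in the model summary posted earlier in the thread: each per-scale anchor list holds 8 numbers, i.e. 4 (w, h) pairs (a quick check, using the first scale's anchors copied from that summary):

    # first-scale anchors from the model summary printed earlier
    anchors_scale1 = [13, 17, 22, 25, 27, 66, 55, 41]
    print(len(anchors_scale1) // 2)  # 4 -> four (w, h) anchor pairs per scale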