grimoire / mmdetection-to-tensorrt

convert mmdetection model to tensorrt, support fp16, int8, batch input, dynamic shape etc.
Apache License 2.0

Multiple batch for only one inference #8

Open cefengxu opened 4 years ago

cefengxu commented 4 years ago

How can I process more than one image (for example 2, i.e. batch_size == 2) in a single inference when using mmdetection-to-tensorrt?

grimoire commented 4 years ago

Sad to say, this repo does not support batched input for now. Batch support is at the top of my ToDo list and will be added soon.

sunpeng981712364 commented 4 years ago

> Sad to say, this repo does not support batched input for now. Batch support is at the top of my ToDo list and will be added soon.

Great work!!

grimoire commented 4 years ago

Hi @cefengxu, I have updated the code (all three repos). Batch input support has been added for some models (tested on Faster R-CNN, Double-Head R-CNN, Cascade R-CNN, RetinaNet, etc.). Just set the opt_shape_param as follows:

```python
opt_shape_param=[
    [
        [1,3,320,320],      # min shape
        [2,3,800,1344],     # optimize shape
        [4,3,1344,1344],    # max shape
    ]
]
```

As long as `opt_shape_param[0][2][0]==4`, it should give you a batch size of up to 4. Not all models support batch input yet; it takes time, and I am still working on it.
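To see how the three shapes constrain valid inputs, here is a small standalone sketch (the helper function is hypothetical, not part of the mmdet2trt API): a TensorRT dynamic-shape profile accepts an input only if every dimension, including the batch dimension, lies between the min and max shapes.

```python
# opt_shape_param as in the comment above: one profile of
# [min shape, optimize shape, max shape], each NCHW.
opt_shape_param = [
    [
        [1, 3, 320, 320],    # min shape
        [2, 3, 800, 1344],   # optimize shape
        [4, 3, 1344, 1344],  # max shape
    ]
]

def shape_in_profile(shape, profile):
    """Hypothetical helper: True if `shape` fits the profile's
    min/max range in every dimension (batch, channels, H, W)."""
    min_shape, _opt_shape, max_shape = profile
    return all(lo <= dim <= hi
               for lo, dim, hi in zip(min_shape, shape, max_shape))

# A batch of 2 images at 800x1344 fits the profile above,
# while a batch of 8 exceeds the max batch dimension of 4.
```

With the max shape's batch dimension set to 4, any batch size from 1 to 4 is accepted at runtime; shapes outside the min/max envelope are rejected by the engine.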

@sunpeng981712364 Thank you. So glad to hear that.

cefengxu commented 4 years ago

Cool ... I will try it ASAP!