grimoire / torch2trt_dynamic

A PyTorch to TensorRT converter with dynamic shape support
MIT License
254 stars 34 forks

Why does the official torch2trt not invoke any plugins, while this repo invokes many plugins even for a simple model? #7

Open lucasjinreal opened 3 years ago

lucasjinreal commented 3 years ago

I tested mobilenetv2.

In the official repo the engine can be generated without any plugins; with this repo several plugins are used.


I don't need dynamic shapes for such a simple model. What is the difference from the official repo?

grimoire commented 3 years ago
Hi. In these 'simple model' cases, the extra plugins come from adaptive_avg_pool2d/adaptive_max_pool2d. Here is the difference:

|                         | official | onnx | this |
|-------------------------|----------|------|------|
| static                  | o        | o    | o    |
| dynamic, output_size=1  | x        | o    | o    |
| dynamic, output_size!=1 | x        | x    | o    |
| w/o plugin              | o        | o    | x    |

The official repo implements it by pooling with a fixed stride and kernel_size. That is simple, but it cannot work in the dynamic case, because those values depend on the input size.
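As a sketch (a hypothetical helper, not the actual torch2trt code), the mapping from adaptive pooling to a fixed pool looks like this; it only works when the input size is known at engine-build time, which is exactly why it breaks for dynamic shapes:

```python
def static_pool_params(in_size: int, out_size: int):
    """Derive a fixed (kernel_size, stride) that reproduces
    adaptive pooling for a KNOWN in_size (hypothetical helper
    illustrating the official approach). The result is exact
    only when in_size is divisible by out_size."""
    stride = in_size // out_size
    kernel_size = in_size - (out_size - 1) * stride
    return kernel_size, stride

# For a 7x7 feature map pooled to 1x1 (MobileNetV2 at 224x224 input):
print(static_pool_params(7, 1))  # -> (7, 7)
# With a dynamic input the 7 is unknown at build time, so no single
# (kernel_size, stride) pair can be baked into the engine.
```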

The ONNX path uses reduce ops to do the pooling: all values along the given dims are gathered into a single value. That is neat, but it fails when output_size!=1.
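For intuition, the output_size=1 case is just a mean (or max) reduction over the spatial dims. A NumPy sketch, assuming NCHW layout:

```python
import numpy as np

def global_avg_pool(x: np.ndarray) -> np.ndarray:
    """Global average pooling expressed as a reduce op over H and W
    (NCHW layout), which works for any spatial size -- hence it
    survives dynamic shapes, but only covers output_size=1."""
    return x.mean(axis=(2, 3), keepdims=True)

x = np.arange(24, dtype=np.float64).reshape(1, 2, 3, 4)
print(global_avg_pool(x).ravel())  # mean over each 3x4 channel
```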

I created this repo because sometimes we need to downsample with adaptive pooling, e.g. in BFP:

                gathered = F.adaptive_max_pool2d(
                    inputs[i], output_size=gather_size)

TRT does not have an adaptive pool layer, and I could not find a 'plan B' that avoids plugins.

Most other plugins were created for similar reasons: either TRT does not provide an implementation, or TRT cannot handle the dynamic case.

You can replace the AdaptiveAvgPool2d in your model with AvgPool2d if you do not need dynamic shape support.
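If you go that route, the swap can be done with a small recursive helper (a hypothetical sketch, not part of this repo; the kernel_size=7 default assumes a MobileNetV2-style 7x7 feature map from a 224x224 input, so adjust it for your model):

```python
import torch.nn as nn

def replace_adaptive_pool(model: nn.Module, kernel_size: int = 7) -> nn.Module:
    """Recursively swap every AdaptiveAvgPool2d for a fixed AvgPool2d.
    kernel_size must match the spatial size of the feature map entering
    the pool. Hypothetical helper for static-shape conversion only."""
    for name, child in model.named_children():
        if isinstance(child, nn.AdaptiveAvgPool2d):
            setattr(model, name, nn.AvgPool2d(kernel_size))
        else:
            replace_adaptive_pool(child, kernel_size)
    return model
```

After this, the converted model contains only plain pooling layers, so no plugin is needed for it.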

Or simply use the official one. I changed the name of this project last night, so you can install both of them without conflict (I have not tested this; if something is wrong, please let me know).

lucasjinreal commented 3 years ago

@grimoire I will give it a test.

likegogogo commented 2 years ago


A quick side question: what is "w/o plugin"? @grimoire

grimoire commented 2 years ago

@ideafold without plugin