TrojanXu / onnxparser-trt-plugin-sample

A sample for onnxparser working with trt user defined plugins for TRT7.0
Apache License 2.0

Is plugin implementation on the TensorRT side necessary? #1

Closed enesmsahin closed 4 years ago

enesmsahin commented 4 years ago

Hi,

First, thank you for this repo.

I understand that we need to implement the custom layer on the onnx-tensorrt side, inside builtin_op_importers.cpp, in order for the TensorRT ONNX parser (nvonnxparser) to be able to parse the custom layer. Shouldn't this be enough to parse the ONNX model file and create a TensorRT inference engine?

I see that you implemented a TensorRT plugin and even CUDA kernels for the grid sample op. I thought TensorRT plugins were only necessary for parsing layers unsupported by the UFF or Caffe parsers (and maybe ONNX as well). But by just implementing the custom layer inside builtin_op_importers.cpp and rebuilding onnx-tensorrt, nvonnxparser should be able to parse the model, shouldn't it?

Thanks in advance.

ghost commented 4 years ago

Within the scope of DEFINE_BUILTIN_OP_IMPORTER(GridSampler):

    // Look up the plugin creator registered under this name/version pair
    const auto mPluginRegistry = getPluginRegistry();
    const auto pluginCreator
        = mPluginRegistry->getPluginCreator(pluginName.c_str(), pluginVersion.c_str());

This is used to find the plugin with name "pluginName" and version "pluginVersion" in the registry.
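
For reference, a minimal sketch of what typically follows that lookup inside the importer (variable names like fc, node, inputTensors, and ctx are illustrative assumptions, not this repo's exact code):

    // Sketch only: build the plugin's creation-time parameters from the
    // ONNX node's attributes, instantiate the plugin, and insert it into
    // the network being built.
    nvinfer1::PluginFieldCollection fc{};   // fields filled from the node's attributes
    nvinfer1::IPluginV2* plugin = pluginCreator->createPlugin(node.name().c_str(), &fc);

    // inputTensors holds the op's input ITensor* pointers gathered by the importer
    auto* pluginLayer = ctx->network()->addPluginV2(
        inputTensors.data(), static_cast<int>(inputTensors.size()), *plugin);

A plugin found this way must already have been registered with TensorRT's plugin registry, which is why a plugin implementation on the TensorRT side is needed at all.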

TrojanXu commented 4 years ago


Just as michael-peng commented, in DEFINE_BUILTIN_OP_IMPORTER(GridSampler) we instruct the ONNX parser how to map an ONNX op onto TensorRT layers. If the op is Conv, you just translate the ONNX Conv to the TensorRT convolution, which has a native mapping in TensorRT, i.e., IConvolutionLayer. But if the op is GridSampler, which does not map directly onto any TensorRT layer or combination of layers, then we have to implement a plugin for TensorRT and add that plugin into the network. What you implement in builtin_op_importers.cpp is not the custom layer itself but the mapping (from the ONNX op to the TensorRT op).
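
To make the contrast concrete, here is a hedged sketch of the two kinds of importer (simplified; attribute handling and variable names are illustrative, not onnx-tensorrt's exact code):

    // Native op: ONNX Conv maps straight onto TensorRT's built-in layer.
    DEFINE_BUILTIN_OP_IMPORTER(Conv)
    {
        // ... translate the ONNX node's attributes (strides, pads, ...), then:
        auto* layer = ctx->network()->addConvolutionNd(
            input, nbOutputMaps, kernelDims, kernelWeights, biasWeights);
        RETURN_FIRST_OUTPUT(layer);
    }

    // Op with no native equivalent: the importer pulls a user-implemented,
    // separately registered plugin out of the registry and inserts that
    // instead (lookup creator -> createPlugin -> addPluginV2, as above).
    DEFINE_BUILTIN_OP_IMPORTER(GridSampler)
    {
        // getPluginRegistry()->getPluginCreator(...), createPlugin(...),
        // ctx->network()->addPluginV2(...)
    }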

John-Yao commented 3 years ago

@TrojanXu Hi, I am also confused about DEFINE_BUILTIN_OP_IMPORTER.

I am new to TensorRT. I have read some plugin examples/blogs and the TensorRT developer guide.

Many projects point to registering a plugin via REGISTER_TENSORRT_PLUGIN or DEFINE_BUILTIN_OP_IMPORTER.

But I could not find where the plugin is registered in this repo.

Is the plugin registered via LD_PRELOAD?

By the way, could you provide some suggestions on how to implement a plugin in TensorRT, or point to an overall blog/tutorial?

Thanks!