Egorundel opened 1 month ago
Can you provide more information about why you are using the EfficientNMS_TRT layer? It appears to be a plugin, so it will not necessarily map 1:1 to nvinfer1::LayerType::kNMS. If you want to create an nvinfer1::LayerType::kNMS layer instead, you may look into replacing this plugin.
@akhilg-nv I am using this export-det.py Python file to create an ONNX model with a dynamic batch size: https://github.com/triple-Mu/YOLOv8-TensorRT/blob/main/export-det.py
@akhilg-nv The type of this layer is pluginV2.
@samurdhikaru Could you advise on this one, i.e. whether such a cast is somehow doable for plugins?
@Egorundel EfficientNMS_TRT refers to a TensorRT plugin layer (specifically efficientNMSPlugin), whereas INMSLayer refers to the inbuilt NMS operator in TensorRT. While the plugin and the inbuilt NMS op are similar in functionality, casting an IPluginV2Layer to an INMSLayer is not possible.
You have a couple of options:
1. Replace the EfficientNMS_TRT node in the ONNX graph with a standard-compliant ONNX NonMaxSuppression node. Parsing the modified ONNX graph with the TRT ONNX parser should then yield a TRT network containing INMSLayers.
2. Modify the export script so that it exports a NonMaxSuppression node instead of EfficientNMS_TRT plugin nodes. It appears that you will need to modify https://github.com/triple-Mu/YOLOv8-TensorRT/blob/c1e76b6dd5d58398939402c15b3c23052802523a/models/common.py#L27 and https://github.com/triple-Mu/YOLOv8-TensorRT/blob/c1e76b6dd5d58398939402c15b3c23052802523a/models/api.py#L150.
Description
Greetings to all. Can you tell me: I have an EfficientNMS_TRT layer in the ONNX model, but its type is not nvinfer1::LayerType::kNMS. Check the screenshot:

If I try to cast this layer with dynamic_cast<nvinfer1::INMSLayer*>, I still don't get the attributes that are inherent to INMSLayer. How can I get the layer attributes?
Environment
TensorRT Version: 8.6.1.6
NVIDIA GPU: RTX3060
NVIDIA Driver Version: 555.42.02
CUDA Version: 11.1