Hi @jongsik-moon, this question is best suited for https://github.com/NVIDIA/TensorRT, which includes many samples showing how to use TensorRT inference outputs in your application.
For this ResNeXt model in particular (which was used in NVIDIA's MLPerf-Inference 3.0 submission), you can look at https://github.com/mlcommons/inference_results_v3.0/tree/d9f23eb124be29a02833f55f6d518e78e8f6433d/closed/NVIDIA/code/retinanet/tensorrt.
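For reference, below is a minimal NumPy sketch of the kind of post-processing an NMS-free RetinaNet engine requires: apply a sigmoid to the classification logits, decode the box regression deltas against the anchors, and run NMS on the host. The tensor shapes, anchor layout, and thresholds are illustrative assumptions only; the authoritative decoding parameters and output layout are in the MLPerf reference code linked above.

```python
# Illustrative RetinaNet-style post-processing sketch (not the official
# reference implementation). Assumes flattened per-anchor tensors:
#   cls_logits: (N, num_classes), box_deltas: (N, 4), anchors: (N, 4) in x1y1x2y2.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_boxes(deltas, anchors):
    """Apply (dx, dy, dw, dh) regression deltas to (x1, y1, x2, y2) anchors."""
    widths = anchors[:, 2] - anchors[:, 0]
    heights = anchors[:, 3] - anchors[:, 1]
    ctr_x = anchors[:, 0] + 0.5 * widths
    ctr_y = anchors[:, 1] + 0.5 * heights

    dx, dy, dw, dh = deltas[:, 0], deltas[:, 1], deltas[:, 2], deltas[:, 3]
    pred_ctr_x = dx * widths + ctr_x
    pred_ctr_y = dy * heights + ctr_y
    pred_w = np.exp(dw) * widths
    pred_h = np.exp(dh) * heights

    return np.stack([
        pred_ctr_x - 0.5 * pred_w,
        pred_ctr_y - 0.5 * pred_h,
        pred_ctr_x + 0.5 * pred_w,
        pred_ctr_y + 0.5 * pred_h,
    ], axis=1)

def nms(boxes, scores, iou_thr=0.5):
    """Plain NumPy non-maximum suppression; returns indices of kept boxes."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter + 1e-9)
        order = order[1:][iou <= iou_thr]
    return keep

def postprocess(cls_logits, box_deltas, anchors, score_thr=0.05, iou_thr=0.5):
    """Turn raw engine outputs into (boxes, scores, labels)."""
    scores = sigmoid(cls_logits)
    boxes = decode_boxes(box_deltas, anchors)
    labels = scores.argmax(axis=1)
    best = scores.max(axis=1)
    mask = best > score_thr
    boxes, best, labels = boxes[mask], best[mask], labels[mask]
    # Class-agnostic NMS for brevity; reference implementations typically
    # run NMS per class (and often per feature-pyramid level) before merging.
    keep = nms(boxes, best, iou_thr)
    return boxes[keep], best[keep], labels[keep]
```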
Hi guys, I followed https://github.com/NVIDIA/Deep-Learning-Accelerator-SW/blob/main/scripts/prepare_models/README.md and successfully exported a TensorRT engine for RetinaNet ResNeXt.
My question is: how can I get bounding boxes from this TensorRT engine? In the retinanet_resnext50.py file, NMS is removed, so the engine outputs the raw network tensors instead of final detections.
Could you share example inference/post-processing code that turns these outputs into bounding box information?
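As an illustration only (not an answer from the repository maintainers), a bare-bones way to run such an engine and collect its raw outputs with the TensorRT 8.x binding-based Python API and pycuda might look like the sketch below. The binding layout and the preprocessing of `input_array` are assumptions that must match your exported engine, and a fixed-shape engine is assumed (dynamic shapes would additionally need `context.set_binding_shape`).

```python
# Hypothetical harness for running the exported engine with the TensorRT 8.x
# Python API and pycuda; adapt binding handling to your own engine.
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

def run_engine(engine_path, input_array):
    # Deserialize the engine and create an execution context.
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    bindings, host_outputs = [], []
    for i in range(engine.num_bindings):
        shape = tuple(engine.get_binding_shape(i))
        dtype = trt.nptype(engine.get_binding_dtype(i))
        device_mem = cuda.mem_alloc(trt.volume(shape) * np.dtype(dtype).itemsize)
        bindings.append(int(device_mem))
        if engine.binding_is_input(i):
            # Input is assumed to already be preprocessed to the binding shape.
            cuda.memcpy_htod(device_mem, np.ascontiguousarray(input_array.astype(dtype)))
        else:
            host_outputs.append((np.empty(shape, dtype=dtype), device_mem))

    context.execute_v2(bindings)

    outputs = []
    for host, dev in host_outputs:
        cuda.memcpy_dtoh(host, dev)
        outputs.append(host)
    # Raw per-level logits / box deltas, ready for host-side decoding + NMS.
    return outputs
```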