Closed d33dler closed 10 months ago
The author implements it directly with the TensorRT API; of course, that requires you to be familiar with TensorRT, and this approach is not easy to debug. I used a combination of ONNX and TensorRT plugins to achieve the same result: I split the model into sub-models (dynamic-vfe, dsvt_input_layer, dsvt_block, scatter, 3D backbone) that run in TensorRT, and the Python and C++ results are the same (a rough sketch of this ONNX-based split is shown below).
Python version
C++ version
Here, many thanks to the author for the contributions to project deployment!
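Not the author's exact pipeline, just a minimal sketch of the ONNX route described above: export one sub-module on its own and then build a TensorRT engine from the resulting ONNX file. The sub-module, tensor shapes, and names below are stand-ins, not the real DSVT code.

```python
import torch
import torch.nn as nn

# Stand-in for one exported sub-module (e.g. dsvt_block); the real module would come
# from the trained DSVT model. This only demonstrates the export mechanics.
class DummyDSVTBlock(nn.Module):
    def __init__(self, channels: int = 192):
        super().__init__()
        self.linear = nn.Linear(channels, channels)

    def forward(self, voxel_feats: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(voxel_feats))

block = DummyDSVTBlock().eval()
dummy_feats = torch.randn(1, 24000, 192)  # assumed (batch, num_voxels, channels)

torch.onnx.export(
    block,
    (dummy_feats,),
    "dsvt_block.onnx",
    input_names=["voxel_feats"],
    output_names=["voxel_feats_out"],
    opset_version=14,
    dynamic_axes={"voxel_feats": {1: "num_voxels"}},  # voxel count varies per frame
)
```

After exporting each sub-model this way, an engine can be built for example with `trtexec --onnx=dsvt_block.onnx --saveEngine=dsvt_block.engine --fp16`, and the sub-models chained together in the C++ runtime.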
Have you compared the inference results between this project and the PT model to see whether they are consistent? Can the corresponding PT model for this project be open-sourced? The results from my current retraining are inconsistent.
The results are consistent between the Python and TensorRT environments; even if there are slight differences, the effect is small.
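For reference, a minimal sketch of how such a Python-vs-TensorRT (or Python-vs-C++) consistency check can be done, assuming both pipelines dump the same output tensor to `.npy` files; the file names are placeholders.

```python
import numpy as np

# Assumed: each pipeline saves the same output tensor (e.g. final box predictions
# or BEV features) to a .npy file for comparison.
ref = np.load("output_python.npy")     # placeholder file name
trt = np.load("output_tensorrt.npy")   # placeholder file name

abs_diff = np.abs(ref - trt)
print("max abs diff :", abs_diff.max())
print("mean abs diff:", abs_diff.mean())
print("allclose(rtol=1e-3, atol=1e-3):", np.allclose(ref, trt, rtol=1e-3, atol=1e-3))
```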
Is it your own model? I mean: use jingyue202205/DSVT-AI-TRT/tools/dsvt_cbgs_dyn_pp_centerpoint.yaml to train your own PT model, then generate dsvt-my.wts from it, then test dsvt-my.wts with jingyue202205/DSVT-AI-TRT; in the end, the difference between the results of my own PT model and the DSVT-AI-TRT run of dsvt-my is significant.
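For what it's worth, a common way to produce the `.wts` file from a trained PyTorch checkpoint looks like the sketch below, assuming DSVT-AI-TRT uses the tensorrtx-style plain-text `.wts` format (one line per tensor: name, element count, big-endian float32 values in hex); the checkpoint path and key layout are placeholders.

```python
import struct
import torch

# Assumption: DSVT-AI-TRT expects the tensorrtx-style text .wts format.
# Checkpoint path and nesting key below are placeholders for your own training output.
ckpt = torch.load("dsvt_my.pth", map_location="cpu")
state_dict = ckpt.get("model_state", ckpt)  # some checkpoints nest the weights

with open("dsvt-my.wts", "w") as f:
    f.write(f"{len(state_dict)}\n")
    for name, tensor in state_dict.items():
        values = tensor.reshape(-1).cpu().float().numpy()
        f.write(f"{name} {len(values)}")
        for v in values:
            f.write(" " + struct.pack(">f", float(v)).hex())
        f.write("\n")
```

If your own PT model and the `.wts`-based engine still diverge significantly, dumping intermediate tensors sub-module by sub-module (as in the ONNX split above) is one way to narrow down where the mismatch starts.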
What speed-up does this deployment achieve? Did you also run the model without TensorRT, and could you show us those results? Thanks.