jkjung-avt / tensorrt_demos

TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet
https://jkjung-avt.github.io/
MIT License

Is there any way to avoid using the reshape operation after inference? #606

Open ysj-xuanyuan opened 8 months ago

ysj-xuanyuan commented 8 months ago

When I use `do_inference_v2`, I need a reshape operation on the output to get the correct result. However, I find that this reshape takes a lot of time, which defeats the purpose of using TensorRT to speed up inference in the first place.
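For context, `do_inference_v2` in this repo returns each network output as a flat 1-D host array, so the caller reshapes it into the tensor layout the post-processing code expects. A minimal sketch of that step (the shape `(1, 255, 19, 19)` is a hypothetical YOLO output used only for illustration): note that `numpy.reshape` on a contiguous array returns a view without copying data, so the reshape call itself is normally very cheap. If profiling attributes significant time to this line, it may actually be measuring the preceding device-to-host copy or stream synchronization completing.

```python
import numpy as np

# Stand-in for a flat 1-D pagelocked host buffer as returned by
# do_inference_v2; the size matches a hypothetical (1, 255, 19, 19) output.
flat_output = np.zeros(1 * 255 * 19 * 19, dtype=np.float32)

# Reshape into the layout expected by post-processing. For a contiguous
# array this is a zero-copy view, not a new allocation.
output = flat_output.reshape(1, 255, 19, 19)

# Confirm no data was copied: the reshaped array shares memory with the
# original flat buffer.
print(output.shape)                # (1, 255, 19, 19)
print(output.base is flat_output)  # True
```

If the reshape itself really were the bottleneck, one could instead allocate the host buffer with the target shape up front and copy into it directly, but given the view semantics above, the cost is more likely in synchronization than in `reshape`.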