Open · zzzzzyh111 opened this issue 3 months ago
I have never seen TensorRT run slower than ONNX before.
Thanks for your prompt reply! Am I correct in understanding that, if nothing goes wrong when converting the ONNX file to a TRT engine, we should theoretically see a speedup?
Yes. Will you try the TensorRT C++ version?
Since I'm unfamiliar with C++, I'm focusing on the Python version for now, using your work as a reference. If our earlier discussion is correct, data loading and preprocessing may be consuming most of the time in my script, so I will keep investigating to find the cause. If everything still looks right but the speedup doesn't materialize, I will try the C++ version and let you know.
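One way to check where the time goes is to time each pipeline stage separately, with warm-up runs discarded. A minimal, framework-agnostic sketch (the stage functions below are placeholders, not the actual script):

```python
import time

def time_stage(fn, *args, warmup=3, iters=20):
    """Average wall-clock time of a callable, discarding warm-up runs
    (JIT compilation, allocator, and cache effects)."""
    for _ in range(warmup):
        fn(*args)
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    return (time.perf_counter() - start) / iters

# Placeholder stages standing in for the real pipeline:
def preprocess(x):
    return [v * 0.5 for v in x]

def infer(x):
    return sum(x)

data = list(range(1000))
pre_ms = time_stage(preprocess, data) * 1000
inf_ms = time_stage(infer, data) * 1000
print(f"preprocess: {pre_ms:.4f} ms, inference: {inf_ms:.4f} ms")
```

One caveat for the GPU stage: CUDA launches are asynchronous, so synchronize the device (e.g. `stream.synchronize()`) inside the timed call, otherwise you measure only launch overhead.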
Thank you again for your prompt reply!
Was it solved?
Thank you for your excellent work! :satisfied: :satisfied: :satisfied:
Recently, I have been trying to use TensorRT to accelerate Depth Anything on a Jetson Orin NX. However, I found that the inference speed of the converted TRT engine is not significantly better than the ONNX file's, and in some cases it is even slower. Specifically:
The library versions are as follows:
The function to convert the .pth file to an ONNX file is as follows:
The function to convert the ONNX file to a TRT file is as follows:
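One thing worth checking on Orin NX: if the engine is built in FP32 (the default), it often runs no faster than ONNX Runtime, because Jetson GPUs get most of their speedup from reduced precision. If the build script does not enable FP16, the equivalent `trtexec` invocation would be (file names are placeholders):

```
# Build an FP16 engine from the exported ONNX file (names are placeholders).
trtexec --onnx=model.onnx --saveEngine=model.trt --fp16
```

`trtexec` also reports GPU latency on its own, which gives a clean baseline measurement independent of any Python-side preprocessing.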
The function to perform inference using the TRT file is as follows:
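For reference, Python-side TensorRT inference usually has the following shape (a sketch assuming TensorRT 8.x with pycuda; binding shapes are placeholders, and this requires a Jetson with TensorRT installed to actually run):

```python
import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("model.trt", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers ONCE, outside the per-frame loop.
h_in = np.zeros((1, 3, 518, 518), dtype=np.float32)   # placeholder shape
h_out = np.zeros((1, 1, 518, 518), dtype=np.float32)  # placeholder shape
d_in, d_out = cuda.mem_alloc(h_in.nbytes), cuda.mem_alloc(h_out.nbytes)
stream = cuda.Stream()

def infer(frame: np.ndarray) -> np.ndarray:
    np.copyto(h_in, frame)
    cuda.memcpy_htod_async(d_in, h_in, stream)
    context.execute_async_v2([int(d_in), int(d_out)], stream.handle)
    cuda.memcpy_dtoh_async(h_out, d_out, stream)
    stream.synchronize()  # time after this point, or you measure launch only
    return h_out
```

Re-allocating buffers or re-creating the execution context per frame is a frequent cause of "TRT slower than ONNX" measurements.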
The code runs without any issues, except for some warnings during the ONNX conversion. However, the final results are still not satisfactory. Looking forward to your response! :heart: :heart: :heart: