xa1on closed this issue 2 months ago.
Did you put opencv_worldxxx.dll file in build/Release folder?
Also add video io dll files
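The DLL step above can be sketched as a small script. This is a hypothetical helper, not part of the repo: it copies the OpenCV runtime DLLs (`opencv_world*` plus the video-I/O plugins) next to the built executable so Windows can find them at load time. The source and destination paths are assumptions you would adjust.

```python
# Hypothetical sketch: copy OpenCV runtime DLLs into build/Release so the
# executable can load them. Path layout is an assumption.
import shutil
from pathlib import Path

def copy_runtime_dlls(opencv_bin: Path, release_dir: Path):
    """Copy opencv_world*.dll and opencv_videoio*.dll into release_dir,
    returning the names of the DLLs that were copied."""
    copied = []
    release_dir.mkdir(parents=True, exist_ok=True)
    for pattern in ("opencv_world*.dll", "opencv_videoio*.dll"):
        for dll in sorted(opencv_bin.glob(pattern)):
            shutil.copy2(dll, release_dir / dll.name)
            copied.append(dll.name)
    return copied
```

Usage would look like `copy_runtime_dlls(Path("C:/opencv/build/x64/vc16/bin"), Path("build/Release"))`, with both paths adjusted to your install.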
Let me know if it works. Please give a star if you like the project and my help.
I've already starred the project; however, even with the DLL files, the same issue is still present. I'm still not getting any errors, and no engine file is being built.
I've also been trying to build the engine file with trtexec using an older version of this repo, but I'm still running into issues. I'm following this guide, but I keep hitting a "network creation failed" error in trtexec. My issue is outlined here with all the relevant files.
Thank you for taking the time to read my issue. Your work looks amazing, and I hope you're able to help me with this :D
did you modify CMakeLists.txt to set your opencv and tensorrt paths?
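For reference, the path edits being asked about might look something like this. This is a hypothetical fragment, not the repo's actual CMakeLists.txt; the variable names and install paths are assumptions to adjust against the real file.

```cmake
# Hypothetical example only; variable names and paths are assumptions.
set(OpenCV_DIR "C:/opencv/build")          # folder containing OpenCVConfig.cmake
set(TENSORRT_DIR "C:/TensorRT-8.6.1.6")    # TensorRT root with include/ and lib/

find_package(OpenCV REQUIRED)
include_directories(${TENSORRT_DIR}/include)
link_directories(${TENSORRT_DIR}/lib)
```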
Please just use the most recent version and let's try to fix your issue.
I checked your onnxmodelexportlog.txt file. It seems your engine is correctly built.
> did you modify CMakeLists.txt to set your opencv and tensorrt paths?
I have modified that file to the correct paths:
That is very strange. It should show an error if there is a bug or problem in the code. If nothing shows, it may be a system error or a permissions issue. Your TensorRT and OpenCV are on the C: drive; will you try running it as admin?
I'm sorry I can't provide any further information, but yeah, it still doesn't work.
What is your GPU?
I have a mobile NVIDIA GTX 1060 with 3 GB of VRAM.
I used the ONNX model you put here and created the engine successfully. I think there is a CUDA version problem on your system; maybe CUDA 11.6 is not suitable for your GPU. I will look it up for you.
I have been able to successfully run the ONNX model from this repo with CUDA acceleration. I don't think compatibility with the CUDA version is the issue. It's possible that my GPU just does not support TensorRT.
I found out that the compute capability of your GPU (NVIDIA GTX 1060) is 6.1.
Also, I checked that compute capability 6.1 (Pascal architecture) is compatible with CUDA 8.0.
Please install CUDA 8.0. Also, don't forget to install the cuDNN version that matches CUDA 8.0.
nvidia-smi states that my current CUDA version is 12.4, which should allow me to use CUDA 11.6. I have been able to use version 11.6 with this repo too.
Let me try it out though.
Then check your onnxruntime: which CUDA version was it installed with?
Could you explain what you mean by the onnx CUDA version? I was able to use version 11.6 with this repo.
Mine says I am using CUDA 12.0, but in fact I installed 11.8. nvidia-smi doesn't show the installed CUDA toolkit version.
Yes, but it should indicate that my current driver supports CUDA version 12.4. I have installed the latest NVIDIA driver for my GPU.
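The confusion here is that the nvidia-smi banner reports the maximum CUDA version the driver supports, not the toolkit that is installed. A small sketch to pull that field out of the banner text (the banner layout shown in the test is an assumption of the usual nvidia-smi header format):

```python
# Hedged sketch: parse the "CUDA Version" field from nvidia-smi's header.
# Note this is the driver's supported CUDA version, not the installed toolkit.
import re

def driver_cuda_version(nvidia_smi_banner):
    """Return the 'CUDA Version: X.Y' value from the nvidia-smi header,
    or None if the field is absent."""
    m = re.search(r"CUDA Version:\s*([\d.]+)", nvidia_smi_banner)
    return m.group(1) if m else None
```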
You used the `pip install onnxruntime-gpu` command to install the GPU build of ONNX Runtime, right? Use `conda list` to check the onnxruntime-gpu version. Then you can check here which onnxruntime version is built against which CUDA version: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html. It automatically installs the CUDA toolkit.
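A quick way to do the check being suggested is to ask onnxruntime itself which execution providers it can see. This sketch guards the import so it degrades gracefully when the GPU build isn't installed; `get_available_providers` is the real onnxruntime API.

```python
# Hedged sketch: report whether onnxruntime is installed and whether its
# CUDA execution provider is available.
import importlib.util

def cuda_provider_available():
    """Return True only if onnxruntime is importable and lists
    'CUDAExecutionProvider' among its available providers."""
    if importlib.util.find_spec("onnxruntime") is None:
        return False
    import onnxruntime as ort
    return "CUDAExecutionProvider" in ort.get_available_providers()
```

If this returns False with onnxruntime-gpu installed, the usual cause is a CUDA/cuDNN version mismatch with that onnxruntime release.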
I should have version 11.6 installed properly; I will check the onnxruntime-gpu version in a second.
In my case, it automatically installed an onnxruntime version suitable for my GPU, which matches CUDA 11.8.
onnxruntime is different; it will use the Anaconda environment to install CUDA. I suggest you install CUDA 8.0 and do the steps again.
I will try that, but again, I have been able to use my version of onnxruntime with my current version of CUDA, with CUDA acceleration, just fine in the past with the exact same model through this repo.
Hmm, sorry, that is all I can suggest from my observation. I can't guess without looking at it.
I am looking at the ONNX website you sent me, and it seems to suggest that only CUDA versions 10 and up are supported.
I'm kind of confused, because I've been able to use ONNX Runtime with CUDA enabled before.
I've completely redownloaded my CUDA and TensorRT. Now I'm getting an error stating:

```
onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
4: [network.cpp::nvinfer1::Network::validate::3163] Error Code 4: Internal Error (Network has dynamic or shape inputs, but no optimization profile has been defined.)
```
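For what it's worth, the INT64 lines are only warnings; the actual failure is the last line: the network has dynamic inputs but no optimization profile. With trtexec, a profile is supplied via the `--minShapes`/`--optShapes`/`--maxShapes` flags. Here is a sketch that assembles such a command line; the input tensor name `input` and the 1x3x518x518 size are assumptions, so check them against the exported model's real input.

```python
# Hedged sketch: build a trtexec invocation that pins a single static
# optimization profile for one dynamic input, which is what the
# "no optimization profile has been defined" error is asking for.
def trtexec_args(onnx_path, engine_path, input_name="input",
                 shape=(1, 3, 518, 518)):
    """Return the trtexec argv with min/opt/max shapes all set to `shape`."""
    dims = "x".join(str(d) for d in shape)
    spec = f"{input_name}:{dims}"
    return [
        "trtexec",
        f"--onnx={onnx_path}",
        f"--saveEngine={engine_path}",
        f"--minShapes={spec}",
        f"--optShapes={spec}",
        f"--maxShapes={spec}",
    ]
```

Setting min, opt, and max to the same shape gives a fixed-shape engine; spreading them apart would let the engine accept a range of input sizes instead.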
I've figured it out. It was either my CUDA, cuDNN, or TensorRT installation that was the problem. I reinstalled everything, and it works now.
Thanks for all your help!
It works on CUDA version 11.6 with an NVIDIA GTX 1060?
I reinstalled everything and used CUDA 11.8 for my GPU. I believe the problem was the cuDNN installation.
I see.
I've prepared the baseline ONNX model; however, whenever I try to run depth-anything-tensorrt.exe, it outputs:
and then, the program exits without any errors at all.
This is the command I'm using:
I'm assuming that I am exporting the ONNX model properly; however, just in case, I'll upload a log. Here's the command I used:
onnxmodelexportlog.txt
Anyone have any idea why this isn't working for me? Thanks.
CUDA: 11.6, TensorRT: 8.6.1.6, Windows: 11