spacewalk01 / depth-anything-tensorrt

TensorRT implementation of Depth-Anything V1, V2
https://depth-anything.github.io/
MIT License
227 stars 28 forks

Unable to create engine from onnx model #26

Closed xa1on closed 2 months ago

xa1on commented 2 months ago

I've prepared the baseline onnx model; however, whenever I try to run depth-anything-tensorrt.exe, it outputs:

Loading model from models/depth_anything_vitb14.onnx...

and then, the program exits without any errors at all.

This is the command I'm using:

build/Release/depth-anything-tensorrt.exe models/depth_anything_vitb14.onnx video/davis_dolphins.mp4

I'm assuming that I am exporting the onnx model properly; however, just in case, I'll upload a log. Here's the command I used:

python export.py --encoder vitb --load_from depth_anything_vitb14.pth --image_shape 3 518 518
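Before building the engine, it can help to sanity-check the exported file. A minimal sketch, assuming the `onnx` Python package is installed and the file name matches the export command above:

```shell
# Sanity-check the exported ONNX model before handing it to TensorRT.
# Assumption: the file name matches the export command above.
python - <<'EOF'
import os, sys
MODEL = "depth_anything_vitb14.onnx"
if not os.path.exists(MODEL):
    print(MODEL + " not found - run export.py first")
    sys.exit(0)
import onnx
model = onnx.load(MODEL)
onnx.checker.check_model(model)  # raises if the graph is structurally invalid
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # a 0 here means a dynamic dimension
EOF
```

If any printed dimension is 0 (dynamic), a plain engine build will need an optimization profile.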

onnxmodelexportlog.txt

Anyone have any idea why this isn't working for me? Thanks.

CUDA: 11.6, TensorRT: 8.6.1.6, Windows: 11

spacewalk01 commented 2 months ago

Did you put the opencv_worldxxx.dll file in the build/Release folder?

spacewalk01 commented 2 months ago

[screenshot]

spacewalk01 commented 2 months ago

Also add the video I/O dll files. [screenshot]

spacewalk01 commented 2 months ago

Let me know if it works. Please give a star if you like the project and my help.

xa1on commented 2 months ago

I've already starred the project, however, even with the DLL files, the same issue is still present. I'm still not getting any errors and no engine file is being built.

xa1on commented 2 months ago

> Let me know if it works. Please give a star if you like the project and my help.

[screenshot]

xa1on commented 2 months ago

I've also been trying to build the engine file with trtexec and an older version of this repo, but I'm still running into issues. I'm following this guide, but I keep running into a "network creation failed" error in trtexec. My issue is outlined here with all the relevant files.

xa1on commented 2 months ago

> Let me know if it works. Please give a star if you like the project and my help.

Thank you for taking the time to read my issue. Your work looks amazing, and I hope you're still able to help me with this :D

spacewalk01 commented 2 months ago

Did you modify CMakeLists.txt to set your opencv and tensorrt paths? [screenshot]

spacewalk01 commented 2 months ago

Please just use the most recent version and let's try to fix your issue.

spacewalk01 commented 2 months ago

I checked your onnxmodelexportlog.txt file. It seems your engine is correctly built.

xa1on commented 2 months ago

> Did you modify CMakeLists.txt to set your opencv and tensorrt paths?

I have modified that file to use the correct paths: [screenshot]

spacewalk01 commented 2 months ago

That is very strange. It should show an error if there is a bug or problem in the code. If it doesn't show one, it may be a system error.

spacewalk01 commented 2 months ago

Or permissions. Your tensorrt and opencv are in the C: folder; will you try running it as admin?

xa1on commented 2 months ago

I'm sorry I can't provide any further information, but yeah, it still doesn't work. [screenshot]

spacewalk01 commented 2 months ago

What is your gpu?

xa1on commented 2 months ago

I have a mobile NVIDIA GTX 1060 with 3 GB VRAM. [screenshot]

spacewalk01 commented 2 months ago

I used the onnx model you put here and created the engine successfully. I think there is a cuda version problem in your system; maybe your cuda 11.6 version is not suitable for your GPU. I will look it up for you. [screenshot]

xa1on commented 2 months ago

I have been able to successfully run the onnx model with cuda acceleration from this repo. I don't think compatibility with the cuda version is the issue. It's possible that my gpu just does not support tensorrt.

spacewalk01 commented 2 months ago

I found out that the compute capability for your GPU (NVIDIA GTX 1060) is 6.1.

spacewalk01 commented 2 months ago

Also, I checked that compute capability 6.1 (Pascal architecture) is compatible with CUDA 8.0. [screenshot]

spacewalk01 commented 2 months ago

Please install cuda 8.0. Also, don't forget to install the cudnn version for cuda 8.0.

xa1on commented 2 months ago

[screenshot]

nvidia-smi states that my current cuda version is 12.4, which should allow me to use cuda version 11.6. I have been able to use version 11.6 with this repo too.

Let me try it out though.

spacewalk01 commented 2 months ago

Then check your onnx cuda version: which version is it installed with?

xa1on commented 2 months ago

could you explain what you mean by onnx cuda version? I was able to use version 11.6 with this repo.

spacewalk01 commented 2 months ago

Mine says I am using cuda 12.0, but in fact I installed 11.8. It doesn't show the cuda toolkit version. [screenshot]

xa1on commented 2 months ago

Yes, but it should indicate that my current driver supports cuda version 12.4. I have installed the latest Nvidia driver for my gpu.
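For reference, the two numbers come from different places; a quick way to see both on one machine (the fallback messages are just for machines without a GPU or toolkit on PATH):

```shell
# "CUDA Version" in nvidia-smi is the newest CUDA the installed *driver* supports;
# nvcc reports the CUDA *toolkit* actually installed, which is what builds use.
command -v nvidia-smi >/dev/null && nvidia-smi | head -n 4 \
  || echo "nvidia-smi not available on this machine"
command -v nvcc >/dev/null && nvcc --version \
  || echo "CUDA toolkit (nvcc) not on PATH"
```

A driver that reports 12.4 can run toolkits up to 12.4, so an 11.x toolkit is fine as far as the driver is concerned.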

spacewalk01 commented 2 months ago

You used the pip install onnxruntime-gpu command to install onnx gpu, right? Use conda list to check the onnxruntime-gpu version. Then you can check here which onnxruntime version is paired with which cuda version: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html It automatically installs the cuda toolkit.
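Besides conda list, the installed build can be checked from inside Python; a sketch that just prints a message if onnxruntime is missing:

```shell
python - <<'EOF'
# Print the installed onnxruntime build and its available execution providers;
# CUDAExecutionProvider must appear in the list for GPU inference to work.
try:
    import onnxruntime as ort
except ImportError:
    print("onnxruntime is not installed in this environment")
else:
    print("onnxruntime", ort.__version__)
    print("providers:", ort.get_available_providers())
EOF
```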

xa1on commented 2 months ago

[screenshots] I should have version 11.6 installed properly; I will check onnxruntime-gpu in a second.

spacewalk01 commented 2 months ago

In my case, it automatically installed an onnxruntime version suitable for my gpu, which is paired with cuda 11.8. [screenshot]

spacewalk01 commented 2 months ago

onnxruntime is different; it will use the anaconda environment to install cuda. I suggest you install cuda 8.0 and do the steps again.

xa1on commented 2 months ago

I will try that, but again, in the past I have been able to run the exact same model with cuda acceleration through this repo, using my version of the onnxruntime and my current version of cuda, just fine.

spacewalk01 commented 2 months ago

Hmm, sorry, that is all I can suggest from my observation. I can't guess without looking at it.

xa1on commented 2 months ago

I am looking at the onnx website you sent me and it seems to be suggesting that only cuda versions 10 and up are supported.

xa1on commented 2 months ago

I'm kind of confused because I've been able to use the onnx runtime with cuda enabled before

xa1on commented 2 months ago

> Hmm, sorry, that is all I can suggest from my observation. I can't guess without looking at it.

I've completely redownloaded my cuda and tensorrt. Now I'm getting an error stating:


```
onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
4: [network.cpp::nvinfer1::Network::validate::3163] Error Code 4: Internal Error (Network has dynamic or shape inputs, but no optimization profile has been defined.)
```

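The last line of that log points at a concrete fix: when the ONNX graph has dynamic input dimensions, TensorRT requires an optimization profile. With trtexec that means pinning the shapes explicitly; a sketch, where the tensor name `input` is an assumption (check the real name in the export log or with Netron):

```shell
# Build an engine with an explicit optimization profile pinning the input shape.
# Assumptions: tensor name "input" and the 518x518 shape from the export command.
trtexec --onnx=depth_anything_vitb14.onnx \
        --saveEngine=depth_anything_vitb14.engine \
        --minShapes=input:1x3x518x518 \
        --optShapes=input:1x3x518x518 \
        --maxShapes=input:1x3x518x518
```

With min, opt, and max set to the same value, the profile covers exactly one shape, which matches the fixed-size export used in this thread.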
xa1on commented 2 months ago

I've figured it out. It was either my cuda, cudnn, or tensorrt installation that was the problem. I reinstalled everything and it works now.

Thanks for all your help!

spacewalk01 commented 2 months ago

It works on cuda 11.6 with the nvidia gtx 1060?

xa1on commented 2 months ago

> It works on cuda 11.6 with the nvidia gtx 1060?

I reinstalled everything and used cuda 11.8 for my gpu. I believe the problem was the cudnn installation.

spacewalk01 commented 2 months ago

I see.