spacewalk01 / depth-anything-tensorrt

TensorRT implementation of Depth-Anything V1, V2
https://depth-anything.github.io/
MIT License

Failed to create engine file with metric_depth_vits model #23

Closed · bhsud closed 3 months ago

bhsud commented 3 months ago

Hello! Thank you for your great work! However, I've encountered a problem. I exported the depth_anything_metric_depth_outdoor_vits.onnx file and ran the following command to create an engine file:

    ./depth-anything-tensorrt depth_anything_metric_depth_outdoor_vits.onnx testvideo.mp4

I got the following error:

    Loading model from depth_anything_metric_depth_outdoor_vits.onnx...
    CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
    onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
    onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
    /core/core/If_OutputLayer: IIfConditionalOutputLayer inputs must have the same shape. Shapes are [1,392,518] and [1,1,392,518].
    ModelImporter.cpp:771: While parsing node number 924 [If -> "/core/core/If_output_0"]:
    ModelImporter.cpp:772: --- Begin node ---
    ModelImporter.cpp:773:
    input: "/core/core/Equal_1_output_0"
    output: "/core/core/If_output_0"
    name: "/core/core/If"
    op_type: "If"
    attribute {
      name: "then_branch"
      g {
        node {
          input: "/core/core/Relu_output_0"
          output: "/core/core/Squeeze_output_0"
          name: "/core/core/Squeeze"
          op_type: "Squeeze"
          attribute { name: "axes" ints: 1 type: INTS }
          doc_string: ""
        }
        name: "torch_jit1"
        output {
          name: "/core/core/Squeeze_output_0"
          type { tensor_type { elem_type: 1 shape { dim { dim_value: 1 } dim { dim_value: 392 } dim { dim_value: 518 } } } }
        }
      }
      type: GRAPH
    }
    attribute {
      name: "else_branch"
      g {
        node {
          input: "/core/core/Relu_output_0"
          output: "/core/core/Identity_output_0"
          name: "/core/core/Identity"
          op_type: "Identity"
          doc_string: ""
        }
        name: "torch_jit2"
        output {
          name: "/core/core/Identity_output_0"
          type { tensor_type { elem_type: 1 shape { dim { dim_value: 1 } dim { dim_value: 1 } dim { dim_value: 392 } dim { dim_value: 518 } } } }
        }
      }
      type: GRAPH
    }
    doc_string: "/home/hsu/Desktop/Depth-Anything/metric_depth/zoedepth/models/base_models/dpt_dinov2/dpt.py(158): forward\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/nn/modules/module.py(1182): _slow_forward\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/nn/modules/module.py(1194): _call_impl\n/home/hsu/Desktop/Depth-Anything/metric_depth/zoedepth/models/base_models/depth_anything.py(273): forward\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/nn/modules/module.py(1182): _slow_forward\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/nn/modules/module.py(1194): _call_impl\n/home/hsu/Desktop/Depth-Anything/metric_depth/zoedepth/models/zoedepth/zoedepth_v1.py(149): forward\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/nn/modules/module.py(1182): _slow_forward\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/nn/modules/module.py(1194): _call_impl\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/jit/_trace.py(118): wrapper\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/jit/_trace.py(127): forward\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/nn/modules/module.py(1194): _call_impl\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/jit/_trace.py(1184): _get_trace_graph\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/onnx/utils.py(891): _trace_and_get_graph_from_model\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/onnx/utils.py(987): _create_jit_graph\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/onnx/utils.py(1111): _model_to_graph\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/onnx/utils.py(1529): _export\n/home/hsu/anaconda3/envs/metric_depth/lib/python3.9/site-packages/torch/onnx/utils.py(504): export\n/home/hsu/Desktop/Depth-Anything/metric_depth/export_to_onnx.py(93): \n"
    ModelImporter.cpp:774: --- End node ---
    ModelImporter.cpp:777: ERROR: ModelImporter.cpp:195 In function parseGraph:
    [6] Invalid Node - /core/core/If
    /core/core/If_OutputLayer: IIfConditionalOutputLayer inputs must have the same shape. Shapes are [1,392,518] and [1,1,392,518].
    4: [network.cpp::validate::2882] Error Code 4: Internal Error (Network must have at least one output)
    Segmentation fault (core dumped)
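
For reference, the doc_string in the dumped node points at the squeeze in metric_depth/.../dpt_dinov2/dpt.py, and an If(Equal) whose then-branch is Squeeze(axes=1) and whose else-branch is Identity is the pattern the ONNX exporter emits for squeeze(dim=1) when it cannot prove that dimension is 1; TensorRT rejects the conditional because the two branches produce different ranks ([1,392,518] vs [1,1,392,518]). Two possible workarounds are sketched below; they are assumptions on my part, not the fix used in this repo:

    # Sketch of two possible workarounds for the mismatched If branches;
    # these are assumptions, not this repo's fix.

    # (a) Before export: make the rank change unconditional so no If node is traced.
    #     Instead of something like
    #         depth = depth.squeeze(dim=1)   # may export as If(Equal(dim, 1)) -> Squeeze / Identity
    #     use indexing, which always drops the channel dimension:
    #         depth = depth[:, 0, :, :]      # rank 4 -> rank 3, no data-dependent branch

    # (b) After export: constant-fold the graph with onnx-simplifier so the Equal
    #     condition becomes a constant and the If node is eliminated.
    import onnx
    from onnxsim import simplify  # pip install onnx-simplifier

    model = onnx.load("depth_anything_metric_depth_outdoor_vits.onnx")
    simplified, ok = simplify(model)
    assert ok, "onnx-simplifier failed to validate the simplified graph"
    onnx.save(simplified, "depth_anything_metric_depth_outdoor_vits_sim.onnx")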

spacewalk01 commented 3 months ago

Hi, please refer to the model preparation part and do the following steps before creating your onnx model. It will solve your problem:

Copy and paste dpt.py in this repo to the <depth_anything_installpath>/depth_anything folder. Then, copy export.py in this repo to <depth_anything_installpath>.

spacewalk01 commented 3 months ago

🤖 Model Preparation

Perform the following steps to create an onnx model:

  1. Download the pretrained model and install Depth-Anything:

    git clone https://github.com/LiheYoung/Depth-Anything
    cd Depth-Anything
    pip install -r requirements.txt
  2. Copy and paste dpt.py in this repo to <depth_anything_installpath>/depth_anything folder. Then, copy export.py in this repo to <depth_anything_installpath>.

  3. Export the model to onnx format using export.py. You will get an onnx file named depth_anything_vit{}14.onnx, such as depth_anything_vitb14.onnx (a rough sketch of what such an export script does follows this list).

    python export.py --encoder vitb --load_from depth_anything_vitb14.pth --image_shape 3 518 518

    [!TIP] The width and height of the model input should be divisible by 14, the ViT patch size.
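
In case it helps to see what step 3 boils down to, here is a rough sketch of the kind of export such a script performs. The real export.py in this repo may differ, and the model construction details below (config dict, checkpoint loading, opset choice) are assumptions:

    # Rough sketch of the kind of ONNX export that export.py performs.
    # The real script in this repo may differ; treat the config and loading
    # details below as assumptions.
    import torch
    from depth_anything.dpt import DepthAnything  # from the Depth-Anything install

    encoder = "vitb"                  # vits | vitb | vitl
    height, width = 518, 518          # both must be divisible by 14 (the ViT patch size)

    # Hypothetical config for the vitb encoder; the real script derives this from --encoder.
    model = DepthAnything({"encoder": encoder, "features": 128, "out_channels": [96, 192, 384, 768]})
    model.load_state_dict(torch.load(f"depth_anything_{encoder}14.pth", map_location="cpu"))
    model.eval()

    dummy = torch.randn(1, 3, height, width)
    torch.onnx.export(
        model, dummy, f"depth_anything_{encoder}14.onnx",
        opset_version=11, input_names=["input"], output_names=["output"],
    )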

bhsud commented 3 months ago

Thank you for replying! Yes, I'm able to run inference with the relative-depth Depth Anything models in TensorRT by following the instructions. However, I hit the above error when I try to build an engine file from the onnx of the metric-depth Depth Anything model.
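
A quick way to confirm where the metric-depth onnx differs is to scan it for If nodes and print the output shapes of their branches before handing the model to TensorRT; a small diagnostic sketch (the file name is taken from the command above):

    # Diagnostic sketch: list the If nodes in the exported ONNX and the output
    # shapes of their then/else branches, to spot rank mismatches such as
    # [1, 392, 518] vs [1, 1, 392, 518].
    import onnx

    model = onnx.load("depth_anything_metric_depth_outdoor_vits.onnx")

    def branch_output_shapes(graph):
        # Collect the declared output shapes of a subgraph ("?" for symbolic dims).
        shapes = []
        for out in graph.output:
            dims = [d.dim_value if d.HasField("dim_value") else "?"
                    for d in out.type.tensor_type.shape.dim]
            shapes.append(dims)
        return shapes

    for node in model.graph.node:
        if node.op_type == "If":
            print(f"If node: {node.name}")
            for attr in node.attribute:
                if attr.name in ("then_branch", "else_branch"):
                    print(f"  {attr.name}: {branch_output_shapes(attr.g)}")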

spacewalk01 commented 3 months ago

I see they added new models for metric depth: https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints_metric_depth. I will try to add support for those, thanks.
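
For anyone who wants to fetch those checkpoints programmatically, a hedged sketch using huggingface_hub is below; the repo id comes from the link above, but the exact filename is a guess, so check the tree listing first:

    # Sketch: download a metric-depth checkpoint from the Hugging Face Space linked above.
    # The filename below is an assumption; verify it against the checkpoints_metric_depth listing.
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub

    path = hf_hub_download(
        repo_id="LiheYoung/Depth-Anything",
        repo_type="space",  # the link points at a Space, not a model repo
        filename="checkpoints_metric_depth/depth_anything_metric_depth_outdoor.pt",
    )
    print(path)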

bhsud commented 3 months ago

Thank you~

gopin95 commented 2 weeks ago

Has the metric-depth model been added yet?