DepthAnything / Depth-Anything-V2

Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation
https://depth-anything-v2.github.io
Apache License 2.0

question about inference time #7

Closed. Linkersem closed this issue 3 months ago.

Linkersem commented 3 months ago

Hi, thanks for your great work! I have tested the inference time of Depth Anything V1 and V2, and I found that V2 is about twice as slow as V1. Is this reasonable?

pinnintipraneethkumar commented 3 months ago

Hi, could you please give more details? I have tried both V1 and V2, and V2 is faster than V1 using both the OpenVINO and PyTorch models.

Linkersem commented 3 months ago

Hi, I tested the same data with the small model on an RTX 4090, and V2 took almost twice as long as V1. I found that most of the time was spent in the stage where the inferred depth values are copied from the GPU to the CPU and returned from the function. After modifying the code, the two now take the same time, but V2 is not faster than V1.
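
For reference, here is a minimal timing sketch that separates the forward pass from the GPU-to-CPU copy, which is the step described above as dominating the measurement. It assumes a loaded `model` and a preprocessed `image` tensor; those names and the `time_inference` helper are illustrative, not the repository's actual API.

```python
import time

import torch


def time_inference(model, image, device="cuda", warmup=3, iters=20):
    """Report forward-pass time and GPU->CPU copy time separately (in ms)."""
    model = model.to(device).eval()
    image = image.to(device)

    with torch.no_grad():
        # Warm-up iterations so one-time CUDA setup does not skew the numbers.
        for _ in range(warmup):
            model(image)

        # Time the forward pass only; synchronize so the clock covers all GPU work.
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(iters):
            depth = model(image)
        torch.cuda.synchronize()
        forward_ms = (time.perf_counter() - t0) / iters * 1e3

        # Time the GPU -> CPU transfer separately; .cpu() blocks until the
        # result is ready, so no extra synchronize is needed afterwards.
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        depth_np = depth.cpu().numpy()
        copy_ms = (time.perf_counter() - t0) * 1e3

    return forward_ms, copy_ms
```

Timing the copy outside the forward-pass loop makes it clear whether a slowdown comes from the model itself or from the transfer and return of the depth map.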

LiheYoung commented 3 months ago

Thank you for the discussion. Our V2 shares the same model architecture as V1, so the inference time should be the same.