DepthAnything / Depth-Anything-V2

Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation
https://depth-anything-v2.github.io
Apache License 2.0

Enhancing Inference Speed by Utilizing More CUDA Cores for Batch Processing #62

Closed ietres closed 2 months ago

ietres commented 2 months ago

Is there a way to perform inference with batches using more CUDA cores and thus improve processing speed?

LiheYoung commented 2 months ago

Hi, you may need to adjust the pre-processing function by setting keep_aspect_ratio to False. This will resize all images to the same pre-defined width and height, which is required for batch inference. https://github.com/DepthAnything/Depth-Anything-V2/blob/31dc97708961675ce6b3a8d8ffa729170a4aa273/depth_anything_v2/dpt.py#L202
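
To illustrate why keep_aspect_ratio must be False: aspect-preserving resizing produces differently shaped tensors that cannot be stacked along a batch dimension, while a fixed target size can. Below is a minimal NumPy sketch of that idea (a hypothetical standalone re-implementation, not the repo's own Resize transform; the 518x518 target and nearest-neighbor interpolation are assumptions for the example):

```python
import numpy as np

def resize_fixed(image, height=518, width=518):
    """Nearest-neighbor resize to a fixed (height, width), ignoring the
    aspect ratio -- the effect of keep_aspect_ratio=False."""
    h, w = image.shape[:2]
    rows = np.arange(height) * h // height  # source row index per output row
    cols = np.arange(width) * w // width    # source col index per output col
    return image[rows[:, None], cols]

def make_batch(images, height=518, width=518):
    """Resize every image to the same shape so they can be stacked into a
    single batch array for one forward pass."""
    return np.stack([resize_fixed(im, height, width) for im in images])

# Images with different sizes and aspect ratios still batch cleanly:
imgs = [np.random.rand(480, 640, 3), np.random.rand(720, 1280, 3)]
batch = make_batch(imgs)
print(batch.shape)  # (2, 518, 518, 3)
```

With aspect-preserving resizing, the two inputs above would come out as differently shaped arrays and np.stack would fail, which is why batched inference needs the fixed-size path.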