DepthAnything / Depth-Anything-V2

[NeurIPS 2024] Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation
https://depth-anything-v2.github.io
Apache License 2.0

Why is the default epoch much larger than V1? #130

Open 1171000410 opened 3 months ago

1171000410 commented 3 months ago

Hello, I previously fine-tuned Depth Anything V1 on the KITTI dataset with the default 5 epochs and achieved good results. Why does V2 default to 120 epochs? It makes the fine-tuning process much slower.

Regards.

LiheYoung commented 3 months ago

When fine-tuning on NYU-D or KITTI, our V2 also follows ZoeDepth and trains for only 5 epochs. But when training on the synthetic datasets Hypersim or Virtual KITTI, we found that fine-tuning for more epochs produces much more fine-grained results.
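To summarize the schedule described above, here is a minimal sketch (the helper and dataset keys are hypothetical, not part of the Depth Anything V2 codebase): 5 epochs for the real datasets, following ZoeDepth, and 120 for the synthetic ones.

```python
# Hypothetical mapping mirroring the epoch schedule described in this thread:
# real datasets (NYU-D, KITTI) -> 5 epochs; synthetic (Hypersim, Virtual KITTI) -> 120.
FINE_TUNE_EPOCHS = {
    "nyu": 5,         # NYU Depth V2 (real)
    "kitti": 5,       # KITTI (real)
    "hypersim": 120,  # Hypersim (synthetic)
    "vkitti": 120,    # Virtual KITTI (synthetic)
}

def epochs_for(dataset: str) -> int:
    """Return the fine-tuning epoch count for a given dataset name."""
    try:
        return FINE_TUNE_EPOCHS[dataset.lower()]
    except KeyError:
        raise ValueError(f"unknown dataset: {dataset!r}")

print(epochs_for("kitti"))     # → 5
print(epochs_for("Hypersim"))  # → 120
```

If you are fine-tuning on a real dataset like KITTI, you can therefore keep the V1-style short schedule; the 120-epoch default only pays off on the synthetic data.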