Open 1171000410 opened 3 months ago
> When trained on NYU-D or KITTI, our V2 also follows ZoeDepth to fine-tune for only 5 epochs. But when trained on synthetic datasets Hypersim or Virtual KITTI, we found that fine-tuning for more epochs can produce much more fine-grained results.
Hello, I previously fine-tuned Depth Anything V1 on the KITTI dataset with the default 5 epochs and achieved good results. Why does V2 default to 120 epochs, which makes the fine-tuning process much slower?
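To put the slowdown in concrete terms, here is a rough sketch of the step-count difference between the two default schedules. The dataset size and batch size below are placeholder assumptions, not the repo's actual values; only the epoch counts come from the defaults discussed here:

```python
# Rough cost comparison between a 5-epoch (V1-style) and a 120-epoch
# (V2 default) fine-tuning schedule. num_samples and batch_size are
# hypothetical; only the epoch counts match the schedules in question.
def total_steps(num_samples: int, batch_size: int, epochs: int) -> int:
    steps_per_epoch = num_samples // batch_size
    return steps_per_epoch * epochs

v1_steps = total_steps(num_samples=23_000, batch_size=16, epochs=5)
v2_steps = total_steps(num_samples=23_000, batch_size=16, epochs=120)
print(v2_steps // v1_steps)  # 120 / 5 = 24x more optimizer steps
```

With everything else held equal, the 120-epoch default means roughly 24x more optimizer steps, which is why the fine-tuning run takes so much longer.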
Regards.