Open XYQ0121 opened 3 weeks ago
Hi, did you correctly ignore the regions without ground-truth values during loss computation?
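For reference, masking out pixels without ground truth usually means computing the loss only where the sparse depth map has a valid measurement (commonly encoded as 0 for "no value"). This is a generic sketch of a masked scale-invariant log (SiLog) loss in NumPy, not the repository's actual implementation; the function name and the `lam` weighting are illustrative assumptions:

```python
import numpy as np

def masked_silog_loss(pred, target, eps=1e-6, lam=0.5):
    """Scale-invariant log loss computed only on valid pixels.

    Sparse ground truth encodes "no measurement" as 0, so those
    pixels are excluded from the loss entirely. `lam` is the usual
    variance-weighting term of the SiLog loss (here an assumed value).
    """
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    valid = target > 0                # mask: pixels that have ground truth
    if not valid.any():
        return 0.0                    # nothing to supervise in this sample
    d = np.log(pred[valid] + eps) - np.log(target[valid] + eps)
    return float(np.mean(d ** 2) - lam * np.mean(d) ** 2)
```

Averaging over `pred[valid]` rather than the whole image is the key point: dividing by the full pixel count, or taking `log(0)` on empty pixels, is a common source of NaN losses with sparse supervision.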
It seems the issue with the loss function occurred because I had not set the learning rate correctly during training. However, even after setting the training parameters correctly and using the same data to fine-tune both v1 and v2, v2 performed worse than v1. Could this be because I trained with sparse depth maps?
@XYQ0121 Hi, I ran into the same NaN loss problem. How did you solve it? I set the LR to the default 0.000005.
Hello, I noticed that the fine-tuning of Depth-Anything-V1 was performed on the KITTI dataset (with sparse depth maps as ground truth), whereas Depth-Anything-V2 was fine-tuned on Virtual KITTI 2 (with dense depth maps as ground truth). When I fine-tune Depth-Anything-V2 on my self-collected dataset (with sparse depth maps as ground truth), the loss misbehaves. Is there a good solution for this?

![Snipaste_2024-06-26_10-47-39](https://github.com/DepthAnything/Depth-Anything-V2/assets/110807624/e447a059-8940-475d-ac95-7f5fd5a7ebaa)