Open haodong2000 opened 5 months ago
Thank you! The release of our evaluation code is indeed part of our plans. We invite you to stay tuned for our upcoming updates. :)
By the way, authors, I would really like to use KITTI-360 for evaluating video depth estimation. Could you please tell me how to obtain the depth values, since the dataset does not seem to offer depth annotations directly?
Thanks so much!
We projected the point cloud into pixel space to obtain the depth maps. For guidance on how to do this, please consult the KITTI-360 scripts repository: https://github.com/autonomousvision/kitti360Scripts/blob/master/kitti360scripts/viewer/kitti360Viewer3DRaw.py
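For reference, the projection itself is standard: transform the points into the camera frame with the extrinsics, apply the pinhole intrinsics, and keep the nearest point per pixel. Below is a minimal NumPy sketch of that idea — the function name, argument conventions, and coordinate frames are my own assumptions, not the exact code used by the authors or the KITTI-360 scripts:

```python
import numpy as np

def project_points_to_depth(points_xyz, K, T_cam_from_world, image_size):
    """Project a 3-D point cloud into a sparse depth map (a sketch).

    points_xyz: (N, 3) points in world coordinates (assumed metric).
    K: (3, 3) camera intrinsic matrix.
    T_cam_from_world: (4, 4) world-to-camera extrinsic transform.
    image_size: (height, width) of the target image.
    """
    h, w = image_size
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    pts_cam = (T_cam_from_world @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Perspective projection with the pinhole model.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    z = pts_cam[:, 2]

    # Discard projections that fall outside the image bounds.
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[valid], v[valid], z[valid]

    # Resolve collisions by keeping the nearest point per pixel (z-buffer).
    depth = np.full((h, w), np.inf)
    np.minimum.at(depth, (v, u), z)
    depth[np.isinf(depth)] = 0.0  # 0 marks pixels with no measured depth
    return depth
```

The resulting map is sparse (most pixels are 0), which is expected for LiDAR-derived ground truth; evaluation is then typically restricted to the valid pixels. For the exact calibration file formats and frame conventions, the linked KITTI-360 viewer script is the authoritative reference.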
Let me know if you have any questions!
Dear authors, you said "We select several zero-shot video clips from KITTI-360 [44], MatrixCity [42] and ScanNet++ [74]".
Could you please tell us how these clips were selected? For example, did you draw them from the testing or validation splits of those datasets?
Awesome work! Congrats!
I am doing some follow-up work on video depth estimation, and I am wondering whether you will release your evaluation code. I would deeply appreciate it!
Best,