jhaoshao / ChronoDepth

ChronoDepth: Learning Temporally Consistent Video Depth from Video Diffusion Priors
MIT License

Code for Evaluation #7

Open haodong2000 opened 5 months ago

haodong2000 commented 5 months ago

Awesome work! Congrats!

I am doing some follow-up work on video depth estimation, and I was wondering whether you plan to release your evaluation code? I would deeply appreciate it!!!

Best,

jhaoshao commented 5 months ago

Thank you! The release of our evaluation code is indeed part of our plans. We invite you to stay tuned for our upcoming updates. :)

haodong2000 commented 3 months ago

By the way, authors, I would really like to use KITTI-360 to evaluate video depth estimation. Could you please tell me how to obtain the depth values, since the dataset does not seem to offer depth annotations directly?

Thanks so much!

jhaoshao commented 3 months ago

We projected the point cloud into pixel space to obtain the depth maps. For guidance on how to do this, see the KITTI-360 scripts repository: https://github.com/autonomousvision/kitti360Scripts/blob/master/kitti360scripts/viewer/kitti360Viewer3DRaw.py

Let me know if you have any questions!
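For reference, the projection step can be sketched roughly as below. This is a minimal illustration, not the authors' actual pipeline: the intrinsics `K` and the world-to-camera transform `T_cam_world` are placeholders, and for KITTI-360 you would load them from the dataset's calibration files (see the kitti360Scripts link above).

```python
import numpy as np

def project_to_depth(points_world, K, T_cam_world, height, width):
    """Project an Nx3 point cloud into a (height, width) sparse depth map.

    K            -- 3x3 pinhole camera intrinsics (placeholder values here)
    T_cam_world  -- 4x4 world-to-camera rigid transform (placeholder)
    Pixels with no projected point are left at 0 (= no measurement).
    """
    n = points_world.shape[0]
    homo = np.hstack([points_world, np.ones((n, 1))])   # Nx4 homogeneous coords
    cam = (T_cam_world @ homo.T).T[:, :3]               # Nx3 camera-frame coords
    cam = cam[cam[:, 2] > 0]                            # keep points in front of camera
    uv = (K @ cam.T).T                                  # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]                         # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = np.full((height, width), np.inf)
    # Z-buffer: keep the nearest point per pixel to handle occlusions.
    for ui, vi, zi in zip(u[valid], v[valid], cam[valid, 2]):
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi
    depth[np.isinf(depth)] = 0.0
    return depth
```

Note the z-buffer step: multiple LiDAR points can land on the same pixel, and keeping the minimum depth avoids occluded background points overwriting foreground ones.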

haodong2000 commented 2 months ago

Dear authors, you wrote: "We select several zero-shot video clips from KITTI-360 [44], MatrixCity [42] and ScanNet++ [74]."

Could you please tell me how the clips were selected? For example, did you draw them from the testing or validation splits of those datasets?