STUDYHARD2113 opened this issue 1 week ago
Hi, the evaluation code is provided in `dav/utils/eval_utils.py`.
Thank you very much for your quick reply! I found the following scale-alignment code in `dav/utils/eval_utils.py`. Since there is no example of it being used, I am guessing that each frame needs to be scale-aligned to its ground truth individually, so how does this ensure depth consistency across the whole video? I would like to test depth consistency over a very long video sequence (>1000 frames), while the tests in the paper use at most 32 frames. Do you have any suggestions on how to do this? For example, should I group the frames before feeding them into the network? Looking forward to your reply, and thanks again!
```python
if eval_cfg.fit_scale_shift:
    A = np.concatenate([pred_masked, np.ones_like(pred_masked)], axis=-1)
    X = np.linalg.lstsq(
        A, 1 / np.clip(gt_masked, a_min=1e-6, a_max=None), rcond=None
    )[0]
```
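For what it's worth, one way to keep a single video self-consistent is to fit one scale/shift pair over all frames jointly rather than per frame. The sketch below reuses the inverse-depth least-squares formulation of the snippet above; the function name and the list-of-frames interface are my own assumptions, not repo code.

```python
import numpy as np

def fit_global_scale_shift(preds, gts, masks):
    """Fit ONE scale/shift for a whole video instead of per frame.

    preds, gts, masks: lists of (H, W) arrays (masks boolean), one per
    frame. Names and interface here are illustrative assumptions.
    """
    # Stack the valid pixels of every frame into one column vector so a
    # single least-squares fit sees the whole sequence at once.
    pred_all = np.concatenate([p[m] for p, m in zip(preds, masks)])[:, None]
    gt_all = np.concatenate([g[m] for g, m in zip(gts, masks)])[:, None]

    # Same inverse-depth affine fit as in dav/utils/eval_utils.py:
    # solve scale * pred + shift ≈ 1 / gt in the least-squares sense.
    A = np.concatenate([pred_all, np.ones_like(pred_all)], axis=-1)
    X = np.linalg.lstsq(
        A, 1 / np.clip(gt_all, a_min=1e-6, a_max=None), rcond=None
    )[0]
    scale, shift = X.ravel()

    # One affine correction for the whole clip aligns predictions to
    # inverse ground-truth depth while preserving the relative scale the
    # network predicted between frames.
    return [scale * p + shift for p in preds]
```

Because a single affine correction is applied to every frame, per-frame alignment error can be slightly higher than with a per-frame fit, but the frame-to-frame scale relationship the network produced is left intact.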
Thank you very much! I am also confused about how to obtain the `gt` and `pred` values passed to the function `abs_rel(gt, pred)`.
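For reference, `abs_rel` is the standard absolute relative error, mean(|gt − pred| / gt), computed over valid pixels. A minimal sketch of how `gt` and `pred` are typically prepared: `gt` is the ground-truth depth at valid pixels and `pred` is the aligned prediction at the same pixels (the variable names below are illustrative, not taken from the repo):

```python
import numpy as np

def abs_rel(gt, pred):
    """Standard absolute relative error: mean(|gt - pred| / gt)."""
    return np.mean(np.abs(gt - pred) / gt)

# Hypothetical usage: gt_depth is one frame's ground-truth depth map and
# pred_depth is the scale/shift-aligned prediction for the same frame.
gt_depth = np.random.uniform(0.5, 10.0, size=(480, 640))
pred_depth = gt_depth * 1.02           # stand-in for an aligned prediction
mask = gt_depth > 1e-6                 # evaluate only valid-depth pixels
print(abs_rel(gt_depth[mask], pred_depth[mask]))
```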
Thank you so much for your awesome work! The paper mentions being able to generate scale-consistent depth estimates for video, but the depths I generate frame by frame with the code you provided do not appear to be scale-consistent. Is there any additional code you use to ensure depth consistency across consecutive frames? When will that part of the code be published? Alternatively, could you provide the code you used to compute the evaluation metrics? I think that would also solve my problem. Looking forward to your reply!
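One common heuristic for sequences longer than the model's window, and a possible answer to the long-video question above, is to run inference on overlapping chunks and affine-align each chunk to the previous one on the shared frames. To be clear, this is a generic stitching technique, not confirmed to be the authors' method:

```python
import numpy as np

def stitch_windows(depth_windows, overlap):
    """Stitch chunked predictions into one sequence by affine-aligning
    each window to the previous one on their shared frames.

    depth_windows: list of (T, H, W) depth arrays produced by running
    the model on overlapping chunks with `overlap` shared frames.
    A common heuristic, not code from this repository.
    """
    out = [depth_windows[0]]
    for win in depth_windows[1:]:
        prev_tail = out[-1][-overlap:].reshape(-1, 1)  # frames already emitted
        cur_head = win[:overlap].reshape(-1, 1)        # same frames, new window

        # Solve scale * cur_head + shift ≈ prev_tail in least squares.
        A = np.concatenate([cur_head, np.ones_like(cur_head)], axis=-1)
        scale, shift = np.linalg.lstsq(A, prev_tail, rcond=None)[0].ravel()

        aligned = scale * win + shift
        out.append(aligned[overlap:])  # drop the duplicated overlap frames
    return np.concatenate(out, axis=0)
```

Note that each alignment only constrains neighbouring windows, so small errors can still accumulate into drift over a 1000+ frame sequence.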