Thank you so much for the amazing work, and for providing guidance on how to work with your repo.
I would like to convert the code from adding noise to refined_depth_t to working with the ground-truth gt_depth_t.
I just wanted to make sure I am doing it correctly. I believe I need to change the following:
I need to use ddim_loss_gt(), which applies noise to gt_depth_t to compute the DDIM loss.
(Not sure about this.) Change CNNDDIMPipeline: each inference step takes the denoised image from the previous timestep and adds noise based on the current timestep. Should I use gt_depth_t at each timestep when running the denoising model, or keep this part as is? The output of the loop is refined_depth_t.
Thank you again for your interest. Really appreciate it.
For the first question, yes, you can use ddim_loss_gt(). But please also double-check that everything there is written correctly, since there may be some hard-coded changes left over from experiments. Just make sure the noise is added to gt_depth_t. One small caveat: I'm not sure, but adding noise directly to the GT might lead to more overfitting.
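To illustrate the idea (not the repo's actual `ddim_loss_gt()`, whose details may differ), here is a minimal sketch of the standard epsilon-prediction diffusion loss with the noise applied to the GT depth. The function name `ddim_loss_gt_sketch` and the `denoise_fn` argument are hypothetical placeholders:

```python
import numpy as np

def ddim_loss_gt_sketch(gt_depth_t, noise, alpha_bar_t, denoise_fn):
    # Forward-diffuse the GT depth to timestep t:
    #   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    noisy = np.sqrt(alpha_bar_t) * gt_depth_t + np.sqrt(1.0 - alpha_bar_t) * noise
    # The model predicts the injected noise; the loss is the usual
    # epsilon-prediction MSE between predicted and true noise.
    pred_noise = denoise_fn(noisy)
    return np.mean((pred_noise - noise) ** 2)
```

The key point is only the first line: the noising target is `gt_depth_t` rather than `refined_depth_t`; everything else stays the same as the standard DDIM training loss.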
For the pipeline, I think keeping that part as it is should be better. This means treating the GT depth as the denoising endpoint while still not leaking too much information at intermediate steps. But it would be worth running an experiment on this; I'm not 100% sure.
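For clarity, "keeping it as is" means each inference step consumes only the previous step's partially denoised sample, never the GT. A minimal sketch of such a deterministic DDIM sampling loop (the names `ddim_sample_sketch`, `model`, and `alpha_bars` are illustrative, not the actual `CNNDDIMPipeline` API):

```python
import numpy as np

def ddim_sample_sketch(model, alpha_bars, x_T):
    # Iterative DDIM denoising: each step uses only the previous step's
    # partially denoised sample, so no GT information leaks into the loop.
    x = x_T
    for t in reversed(range(1, len(alpha_bars))):
        a_t, a_prev = alpha_bars[t], alpha_bars[t - 1]
        eps = model(x, t)  # predicted noise at step t
        # predicted clean depth from the current noisy sample
        x0_hat = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        # deterministic DDIM update (eta = 0) toward timestep t-1
        x = np.sqrt(a_prev) * x0_hat + np.sqrt(1.0 - a_prev) * eps
    return x  # plays the role of refined_depth_t
```

Feeding `gt_depth_t` into `model` at every step instead would collapse the loop into something close to a supervised regression on GT, which is the information leak being avoided.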
I do think monotonic-nn is a better idea than using GT directly, but yes, GT could serve as a baseline.