qianqianwang68 / omnimotion


Question about the depth consistency loss #43

Open Ramseyous0109 opened 8 months ago

Ramseyous0109 commented 8 months ago

Hi, thanks for this amazing work. I was reading the code in trainer.py and found that a depth consistency loss method is defined but never used in training:

    def compute_depth_consistency_loss(self, proj_depths, pred_depths, visibilities, normalize=True):
        '''
        :param proj_depths: [n_imgs, n_pts, 1]
        :param pred_depths: [n_imgs, n_pts, 1]
        :param visibilities: [n_imgs, n_pts, 1]
        :param normalize: if True, divide the masked MSE by the mean visibility
        :return: depth loss
        '''
        if normalize:
            mse_error = torch.mean((proj_depths - pred_depths) ** 2 * visibilities) / (torch.mean(visibilities) + 1e-6)
        else:
            mse_error = torch.mean((proj_depths - pred_depths) ** 2 * visibilities)
        return mse_error

It seems that you tried to supervise the model with the consistency between the depth value mapped from frame $i$ to frame $j$ and the depth value predicted directly in frame $j$. I wonder why you eventually abandoned this loss in training.
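For concreteness, here is a toy invocation with random tensors, just to make the expected shapes and the visibility masking explicit (my own sketch, not code from trainer.py):

    import torch

    # Toy shapes matching the docstring above; values are random and
    # purely illustrative (my own sketch, not code from trainer.py).
    n_imgs, n_pts = 4, 1024
    proj_depths = torch.rand(n_imgs, n_pts, 1)    # depths mapped from frame i into frame j
    pred_depths = torch.rand(n_imgs, n_pts, 1)    # depths predicted directly in frame j
    visibilities = (torch.rand(n_imgs, n_pts, 1) > 0.5).float()  # 1 = visible

    # normalize=True branch: masked MSE divided by the mean visibility
    mse = torch.mean((proj_depths - pred_depths) ** 2 * visibilities)
    loss = mse / (torch.mean(visibilities) + 1e-6)

If I read it correctly, dividing by the mean visibility keeps the loss magnitude comparable when the fraction of visible points varies across batches.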

In addition, do you think it's possible to add depth supervision terms to produce more reliable depth information instead of relying on pseudo-depth, and would this enhance performance?

Looking forward to your reply!

qianqianwang68 commented 8 months ago

Hi, your understanding of the loss is correct. While it makes a lot of sense, we didn't use it because we found it didn't improve performance much.

My intuition is that if we have a pair of cycle-consistent 2D correspondences A and B, then by enforcing that A maps to B and B maps to A, the depth loss is somewhat automatically fulfilled. Suppose A already maps to B with very peaky opacity, i.e., the first nonzero sample on A's ray has opacity 1 and flows to a 3D location $X_B$ that projects onto B's pixel location. Then, for B to map back to A, a sensible solution is to give zero opacity to all samples on B's ray whose depth is smaller than that of $X_B$, so that B's corresponding 3D location becomes $X_B$, which the invertible network maps back to A. So enforcing cycle-consistent correspondences, combined with the invertible network, enforces depth consistency as well.
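To make this concrete, here is a toy alpha-compositing example (my own illustrative sketch, not code from the repo) showing that once every sample in front of $X_B$ has zero opacity, B's composited 3D location collapses to $X_B$:

    import torch

    # Samples along B's ray, with opacities; the rendered 3D location is
    # the alpha-composited expectation over samples. With opacity 0 on all
    # samples closer than X_B and opacity 1 at X_B, the composite equals
    # X_B, so B's depth matches the point A mapped to -- no explicit
    # depth loss needed.
    depths = torch.tensor([0.5, 1.0, 1.5, 2.0])      # sample depths along B's ray
    xyz = torch.stack([torch.zeros(4), torch.zeros(4), depths], dim=-1)
    alphas = torch.tensor([0.0, 0.0, 1.0, 0.0])      # peaky opacity at X_B (depth 1.5)

    # standard compositing weights: w_k = alpha_k * prod_{l<k} (1 - alpha_l)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alphas[:-1]]), dim=0)
    weights = alphas * trans
    composited = (weights[:, None] * xyz).sum(dim=0)  # -> (0, 0, 1.5) = X_B
    print(composited)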

Yes, adding depth supervision is possible, and we tried something similar too, but I'm not sure it would improve tracking performance (it may also be that we didn't try hard enough). It seemed to me that by simply adding a scale/shift-invariant depth supervision, there is tension between satisfying the flow loss and the depth loss, which makes the optimization harder.

One can also make major changes, e.g., adding depth supervision along with camera poses and perspective projection (so that the static part can truly have zero deformation, which makes the job of the invertible network easier), but this can get a bit too complicated. How to add depth to allow better handling of occlusion is definitely worth exploring.
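For reference, here is a generic sketch of the kind of scale/shift-invariant depth supervision mentioned above, with a per-image least-squares alignment in the spirit of MiDaS (an illustration of the idea, not our exact implementation):

    import torch

    # Solve for a per-image scale s and shift t that best align the
    # predicted depth to the pseudo-depth target over visible points,
    # then penalize the residual (generic sketch, not our exact variant).
    def scale_shift_invariant_depth_loss(pred, target, mask):
        # pred, target, mask: [n_imgs, n_pts]
        a = torch.stack([pred, torch.ones_like(pred)], dim=-1)  # [n_imgs, n_pts, 2]
        a = a * mask[..., None]
        b = (target * mask)[..., None]                          # [n_imgs, n_pts, 1]
        # normal equations: (A^T A) x = A^T b, with x = (s, t) per image
        ata = a.transpose(1, 2) @ a                             # [n_imgs, 2, 2]
        atb = a.transpose(1, 2) @ b                             # [n_imgs, 2, 1]
        x = torch.linalg.solve(ata + 1e-6 * torch.eye(2), atb)  # [n_imgs, 2, 1]
        s, t = x[:, 0], x[:, 1]                                 # each [n_imgs, 1]
        aligned = s * pred + t
        return torch.mean((aligned - target) ** 2 * mask) / (mask.mean() + 1e-6)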

Ramseyous0109 commented 7 months ago

Thank you so much for your detailed response. I'm really grateful for it.

> It seemed to me that by simply adding a scale/shift-invariant depth supervision, there is tension between satisfying the flow loss and the depth loss, which makes the optimization harder.

In fact, I already tried this simple way of adding scale/shift-invariant depth supervision. The results confirm the tension between the flow loss and the depth loss you mentioned: in my experiments, they could not be optimized together in this way.

> One can also make major changes, e.g., adding depth supervision along with camera poses and perspective projection

The idea of adding camera poses and using perspective projection to supervise the static areas also occurred to me. I'll try it, though I don't yet have a clear idea of how to take occlusion into account in this supervision. I'll think about it carefully.
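To make my rough idea concrete, here is a hypothetical sketch (all names and the formulation are my own assumptions): project 3D points into frame $j$ with known intrinsics $K$ and world-to-camera pose $[R \mid t]$, and penalize the difference between the perspective depth and the pseudo-depth, only on static pixels:

    import torch

    # Hypothetical static-region depth term (my sketch, not OmniMotion
    # code): perspective-project world-space points into frame j and
    # compare their camera-space depth against pseudo-depth sampled at
    # the projected pixels, only where a static-region mask is on.
    def static_depth_term(pts_world, R, t, K, pseudo_depth, static_mask):
        # pts_world: [n, 3]; R: [3, 3]; t: [3]; K: [3, 3]
        # pseudo_depth, static_mask: [H, W]
        pts_cam = pts_world @ R.T + t                 # camera coordinates
        z = pts_cam[:, 2:3]                           # perspective depth, [n, 1]
        uv = (pts_cam @ K.T)[:, :2] / z               # projected pixels, [n, 2]
        u = uv[:, 0].round().long().clamp(0, pseudo_depth.shape[1] - 1)
        v = uv[:, 1].round().long().clamp(0, pseudo_depth.shape[0] - 1)
        target = pseudo_depth[v, u][:, None]          # [n, 1]
        m = static_mask[v, u][:, None].float()        # [n, 1], 1 = static pixel
        return torch.mean((z - target) ** 2 * m) / (m.mean() + 1e-6)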

Thanks again, and I'd appreciate it if we could keep in contact by e-mail when it's convenient for you.