Closed zjulabwjt closed 1 year ago
Hi, I will add it to the final version if needed. During reimplementation, I found the smooth term has only a minor effect, so I didn't include it in this version.
Best, Heng
The fake depth loss provides an initialization on depth to prevent the optimization from falling into a singular solution or local minima. It is only used in the first 150 iterations of the initialization. If you have another initialization method, feel free to replace the fake depth loss with it and see how effective it is.
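The warm-up schedule described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: the function name, the `step` counter, and the `warmup_iters` argument are assumptions; the constant prior of 1.5, `beta=0.1`, and `reduction="sum"` come from the snippet quoted in this thread.

```python
import torch

def initialization_loss(depth: torch.Tensor, step: int, warmup_iters: int = 150) -> torch.Tensor:
    """Fake depth loss: pull predicted depth toward a constant prior (1.5)
    during the first `warmup_iters` iterations to avoid a singular solution.
    Illustrative sketch only; names and signature are hypothetical."""
    if step < warmup_iters:
        return torch.nn.functional.smooth_l1_loss(
            depth, torch.ones_like(depth) * 1.5, beta=0.1, reduction="sum"
        )
    # After the warm-up phase the term is dropped entirely.
    return depth.new_zeros(())
```

After iteration 150 the returned loss is zero, so the term no longer influences the gradients and the depth is driven by the remaining losses.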
Let me know if you still have any questions.
Best, Heng
Thanks for your reply! And when will you open-source the full code?
Maybe ~June, depending on my workload.
Best
I am confused about the depth smooth loss. I found that your current code uses `fake_depth_loss`:

```python
fake_depth_loss = torch.nn.functional.smooth_l1_loss(
    depth, torch.ones_like(depth) * 1.5, beta=0.1, reduction="sum"
)
```

What does this mean in the code? Besides, I can't find the smooth loss in the paper.