@vvvsice Good question.
It's true that the depths around the points are not the same. The key idea is to compare patch similarity between different frames. You could also implement this idea in a different manner: for example, first project the 3D points into the other frames and crop a patch around each projected coordinate, then compute the loss. However, this implementation ignores camera rotation, while assuming the same depth accounts for camera rotation.
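To make the difference concrete, here is a minimal numpy sketch of that alternative (project the point, then crop a regular patch around the projection). This is an illustrative assumption-laden sketch, not the repo's actual code: the intrinsics `K`, pose `R`, `t`, keypoint `u`, and depth `d` are all made-up values.

```python
import numpy as np

def project(K, R, t, p3d):
    """Project a 3D point from the reference camera frame into the
    other camera (pinhole model, reference-to-other pose R, t)."""
    q = K @ (R @ p3d + t)
    return q[:2] / q[2]

# Illustrative values only (hypothetical camera and keypoint).
K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
theta = np.deg2rad(5.0)                      # small in-plane rotation
R = np.array([[np.cos(theta), -np.sin(theta), 0.],
              [np.sin(theta),  np.cos(theta), 0.],
              [0., 0., 1.]])
t = np.array([0.1, 0., 0.])

u = np.array([300., 200.])                   # keypoint in the reference image
d = 2.0                                      # its estimated depth
p3d = d * (np.linalg.inv(K) @ np.array([u[0], u[1], 1.0]))

# Variant 1: project the single point, then crop a regular patch
# around the projection. The offsets form a fixed axis-aligned grid,
# so the sampled pattern is identical no matter what R is.
offsets = np.stack(np.meshgrid(np.arange(-1, 2),
                               np.arange(-1, 2)), -1).reshape(-1, 2)
patch_point_proj = project(K, R, t, p3d) + offsets   # regular 3x3 grid
```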
I see, thanks a lot.
Hi, thanks for sharing this excellent work. Could you please explain "this implementation ignores camera rotation, while assuming the same depth accounts for camera rotation" in more detail? I'm still confused about this. Much appreciated.
@belkahorry For the first one, you always crop a local patch around the projected point; the patch is a regular grid and does not depend on the camera rotation. For the second one, we project a patch (instead of a point) with the same depth into the other image; the projected patch is no longer a regular grid and thus depends on the camera rotation.
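Continuing the illustrative sketch from above (reusing `project`, `K`, `R`, `t`, `u`, `d`, and `offsets`), the second variant might look like this; again a hedged sketch under the same assumptions, not the repo's code:

```python
# Variant 2: lift the whole reference patch to the same depth d and
# project every patch pixel individually. The projected coordinates
# are generally no longer a regular grid; rotating the camera warps
# the pattern, which is how this variant accounts for rotation.
patch_ref = u + offsets                               # 3x3 patch in reference image
rays = (np.linalg.inv(K) @ np.hstack([patch_ref, np.ones((9, 1))]).T).T
p3d_patch = d * rays                                  # same depth for every pixel
patch_same_depth = np.array([project(K, R, t, p) for p in p3d_patch])

# patch_point_proj is just a translated axis-aligned grid;
# patch_same_depth is rotated/warped by R.
```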
It's clear, thanks!
Hi, I'm still a bit confused about why the regular grid relates to the camera rotation.
It seems that the keypoints extracted by DSO are mainly distributed along object edges, where the depth variance may be large, so I'm wondering whether the same-depth assumption is plausible. Could you please share the reasoning behind this implementation?