AljazBozic / DeepDeform

[CVPR'2020] DeepDeform: Learning Non-rigid RGB-D Reconstruction with Semi-supervised Data

Dense correspondence between point clouds? #6

Closed Sentient07 closed 1 year ago

Sentient07 commented 2 years ago

Hello,

Thanks for this amazing work. I was wondering whether dense ground-truth correspondence information is available between two point clouds. I see that there is a per-pixel displacement from optical flow; is it possible to use this to establish a dense point-to-point map between two point clouds? Thank you!

AljazBozic commented 2 years ago

Hi,

I'm sorry for the very late reply. Scene flow is actually also available (in the same format as the optical flow), with a 3D flow vector for every pixel. It can be combined with the provided depth maps to establish dense 3D correspondences: use the depth map and the corresponding camera parameters to backproject the pixels into 3D points in the source frame, then apply the 3D scene flow to transform those points to the target frame.
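A minimal sketch of that procedure in NumPy (assuming standard pinhole intrinsics `fx, fy, cx, cy`, depth in meters with 0 marking invalid pixels, and the scene flow stored as a per-pixel 3D offset; loading the actual dataset files is not shown):

```python
import numpy as np

def backproject_and_apply_flow(depth, scene_flow, fx, fy, cx, cy):
    """Backproject a depth map to 3D and apply per-pixel 3D scene flow.

    depth:      (H, W) depth in meters (0 where invalid)
    scene_flow: (H, W, 3) per-pixel 3D flow from source to target frame
    Returns (N, 3) source points and their (N, 3) target correspondences.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0

    # Backproject valid pixels into the source camera frame.
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    points_src = np.stack([x, y, z], axis=-1)

    # Dense correspondence: add the 3D scene flow to each source point.
    points_tgt = points_src + scene_flow[valid]
    return points_src, points_tgt
```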

Best, Aljaz