ethnhe / FFB6D

[CVPR2021 Oral] FFB6D: A Full Flow Bidirectional Fusion Network for 6D Pose Estimation.
MIT License

how to get all the points of point cloud? #28

Closed minwang-ai closed 3 years ago

minwang-ai commented 3 years ago

Hi Yisheng,

You transform each pixel of the depth image into a corresponding point in the point cloud using the camera intrinsic matrix. How do you (FFB6D, as well as PVNet3D and DenseFusion) obtain the points of the occluded parts? Do you use 3D model matching?

Best, Min
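For reference, the pixel-to-point transformation being discussed is the standard pinhole back-projection. Below is a minimal NumPy sketch of that step, assuming a depth map in metric units and a 3x3 intrinsic matrix K; the function name and variable layout are illustrative and not taken from the FFB6D codebase.

```python
import numpy as np

def depth_to_pointcloud(depth, K):
    """Back-project a depth map into camera-frame 3D points.

    Uses the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Only pixels with valid (positive) depth produce points, so surfaces
    occluded from the camera simply contribute nothing.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    h, w = depth.shape
    # Pixel coordinate grids: u runs over columns, v over rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```

This yields one 3D point per valid depth pixel, which matches the observation that the number of points equals the number of (valid) pixels, and shows why occluded geometry is absent: it never appears in the depth image.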

ethnhe commented 3 years ago

Do you mean the occluded parts that are invisible from the camera? No, we didn't use 3D model matching. We only use the visible point cloud from the camera.

minwang-ai commented 3 years ago


Do you mean the occluded parts that are invisible from the camera? No, we didn't use 3D model matching. We only use the visible point cloud from the camera.

Yes. Thank you for your reply! Do you mean you only use the 3D models of the objects for rendering with raster_triangle?

ethnhe commented 3 years ago

We use the scene point cloud visible from the camera as input to FFB6D. For raster_triangle, we use the reconstructed object meshes provided by the dataset creators for synthetic scene rendering.