niladridutt / Diffusion-3D-Features

Diffusion 3D Features (Diff3F): Decorating Untextured Shapes with Distilled Semantic Features [CVPR 2024]
https://diff3f.github.io/
MIT License

Demo of point cloud input #5

Open 2019EPWL opened 2 weeks ago

2019EPWL commented 2 weeks ago

Hi, thank you for sharing this work. Currently, the .ipynb file only provides a demo for mesh input. Could you advise on how to proceed when the input is a point cloud? I see from render_point_cloud.py that it only returns a depth image, which differs from the mesh rendering.

2019EPWL commented 2 weeks ago

Hi, the mesh in the demo is artificial, very clean, and has thin slices. However, many meshes generated by implicit networks are double-layered and hollow in the middle. I found that the method from this paper does not perform well on these generated meshes. Since point clouds are more common in practice, it would be great if you could provide a simple demo for point clouds.

niladridutt commented 2 weeks ago

Hi @2019EPWL,

Thanks for bringing this up. I have not tested this on AI-generated meshes, but if you can link the examples you tested, I am keen to try them out. My first guess is that generated meshes have poor geometry, which can yield depth and normal maps quite different from what ControlNet has been trained on. However, good-quality meshes, such as those from https://www.kaedim3d.com/, can definitely work well.
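
One way to check this on your side is to render the depth map of a generated mesh and look at it directly. Below is a minimal sketch of such a check, not the repository's rendering code; it assumes PyTorch3D is installed and uses a placeholder mesh path:

```python
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    look_at_view_transform,
    FoVPerspectiveCameras,
    RasterizationSettings,
    MeshRasterizer,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# "mesh.obj" is a placeholder path; point it at the generated mesh you want to inspect.
mesh = load_objs_as_meshes(["mesh.obj"], device=device)

# One example viewpoint; sweep elev/azim to inspect depth from several views.
R, T = look_at_view_transform(dist=2.0, elev=10.0, azim=30.0)
cameras = FoVPerspectiveCameras(R=R, T=T, device=device)

raster_settings = RasterizationSettings(
    image_size=512, blur_radius=0.0, faces_per_pixel=1
)
rasterizer = MeshRasterizer(cameras=cameras, raster_settings=raster_settings)

# zbuf holds per-pixel depth; background pixels are -1.
fragments = rasterizer(mesh)
depth = fragments.zbuf[..., 0]  # shape (1, 512, 512)
```

If the depth map already looks noisy or shows the double-layered surfaces you mention, the ControlNet conditioning will be degraded regardless of the rest of the pipeline.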

For point cloud data, we combine the rendered depth map with Canny edge maps, as described in the supplementary material of our paper (eq 13). Note that this only works well with dense point clouds. I will look into adding a demo, thanks for the idea!
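
For reference, here is a minimal sketch of that idea, not the repository's actual code: it assumes the point cloud has already been rendered to a depth map (a float array with background pixels at 0), normalizes it, computes Canny edges with OpenCV, and overlays them with a simple per-pixel maximum. The exact combination used in eq 13 of the supplementary may differ.

```python
import cv2
import numpy as np

def depth_plus_canny(depth, low_thresh=50, high_thresh=150):
    """Combine a rendered point-cloud depth map with its Canny edge map.

    depth: (H, W) float array from the point cloud renderer, with
    background pixels at 0 (or any value <= 0).
    Returns an (H, W, 3) uint8 image usable as conditioning input.
    """
    # Normalize valid depth values to [0, 1], leaving the background at 0.
    valid = depth > 0
    d = np.zeros_like(depth, dtype=np.float32)
    if valid.any():
        d_min, d_max = depth[valid].min(), depth[valid].max()
        d[valid] = (depth[valid] - d_min) / max(d_max - d_min, 1e-8)
    depth_u8 = (d * 255).astype(np.uint8)

    # Canny edges computed on the depth image itself.
    edges = cv2.Canny(depth_u8, low_thresh, high_thresh)

    # Simple combination: overlay the edges onto the depth map.
    combined = np.maximum(depth_u8, edges)
    return np.repeat(combined[..., None], 3, axis=2)
```

As noted above, this only behaves well for dense point clouds, since sparse renders produce fragmented depth maps and spurious edges.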