TQTQliu / GeFu

[CVPR 2024] Geometry-aware Reconstruction and Fusion-refined Rendering for Generalizable Neural Radiance Fields
https://gefucvpr24.github.io/
MIT License

example for extracting mesh or point cloud #1

Closed by yotofu 1 month ago

yotofu commented 1 month ago

Really a great job! Any plan to give an example for extracting a mesh or point cloud?

TQTQliu commented 1 month ago

Thanks for your attention. I have just added the code to reconstruct point clouds. Add the save_ply True parameter to save the reconstructed point clouds. Taking DTU as an example, just run:

python run.py --type evaluate --cfg_file configs/gefu/dtu_pretrain.yaml gefu.eval_depth True save_ply True

Since the model can produce depth maps, the pixels in the image can be unprojected into 3D space to obtain point clouds. However, it is worth noting that the code just provided does not perform any depth filtering, so the resulting point cloud will contain a large number of points and some noise. If a more accurate point cloud is needed, masks can be generated by checking the geometric consistency of the depth maps across views; refer to the fusion methods pcd or dypcd.
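For illustration, here is a minimal sketch of that unprojection step (not the repository's own code); the names `depth`, `K` (intrinsics), `c2w` (camera-to-world pose), and the optional `rgb` image are assumptions standing in for whatever the model and dataset loader provide:

```python
# Minimal sketch (not the repository's code): unproject one predicted depth map
# into a world-space point cloud. `depth` (H, W), `K` (3, 3) intrinsics, `c2w`
# (4, 4) camera-to-world pose, and the optional `rgb` (H, W, 3) image in [0, 1]
# are assumed inputs.
import numpy as np
import open3d as o3d

def depth_to_pointcloud(depth, K, c2w, rgb=None):
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))                # pixel coordinates
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T  # homogeneous pixels, (3, H*W)
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)         # back-project to camera space
    cam = np.concatenate([cam, np.ones((1, cam.shape[1]))], 0)    # homogeneous, (4, H*W)
    world = (c2w @ cam)[:3].T                                     # camera space -> world space
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(world)
    if rgb is not None:
        pcd.colors = o3d.utility.Vector3dVector(rgb.reshape(-1, 3))
    return pcd

# e.g. o3d.io.write_point_cloud("view0.ply", depth_to_pointcloud(depth, K, c2w, rgb))
```

Running this for each source view and concatenating the results gives the full, unfiltered cloud; the geometric-consistency masks mentioned above would then prune points whose depths disagree between views.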

yotofu commented 1 month ago

Thanks! I found the config below (see attached image). Is there any method to extract a mesh with GeFu?

TQTQliu commented 1 month ago

This is configuration code inherited from another repo; we don't use it, because in our method novel view synthesis does not require mesh extraction. However, since the model can produce depth maps and point clouds, these can also be converted into a corresponding mesh if you want.

TQTQliu commented 1 month ago

Hi, we provide a case of point-cloud-to-mesh conversion here, hoping it will be helpful to you. However, it's worth noting that our method does not focus on surface reconstruction but rather on the quality of novel view synthesis. In contrast, methods like NeuS focus on surface reconstruction and can produce good meshes.
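As a rough sketch of such a conversion (not the linked example itself), Poisson surface reconstruction in Open3D can be run on the saved point cloud; the file names and parameter values below are illustrative assumptions:

```python
# Hedged sketch (not the repo's linked example): turn a saved point cloud into
# a mesh via Open3D Poisson surface reconstruction. File names and parameter
# values (voxel size, normal radius, octree depth) are illustrative only.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("points.ply")
pcd = pcd.voxel_down_sample(voxel_size=0.005)          # thin out the dense, noisy cloud
pcd.estimate_normals(                                  # Poisson needs oriented normals
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Trim low-density vertices, which typically come from noise or unobserved regions.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))

o3d.io.write_triangle_mesh("mesh.ply", mesh)
```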

yotofu commented 1 month ago

Got it, thanks a lot!