xxlong0 / SparseNeuS

SparseNeuS: Fast Generalizable Neural Surface Reconstruction from Sparse views
MIT License

what do I need to do if I need to make the network only predict the model? #14

Open MNILjj opened 1 year ago

MNILjj commented 1 year ago

Dear author,

Thanks for your excellent work! I want to try to reconstruct the model without the background. I downloaded your training set and removed the background, but no matter whether "use_white_bkgd" is set to "True" or "False", the obtained model still has a background. Do I need to modify some parameters in the "general_lod0.conf" file?

The context I'm talking about is the part outside the predicted model, as shown in the screenshot below: [screenshot]

The parameter I modified is this one: [screenshot]

These are the training images and the predicted model after I removed the background. I have tried both white and black backgrounds, and the results are not very good. [screenshots: rect_001_0_r5000 and two model views]

To quickly verify my idea, I am currently training on only one scene, scan7. What do I need to do to make the network predict only the model? Thanks.

flamehaze1115 commented 1 year ago

Hello. Background surfaces are a common problem for neural-rendering-based methods, including NeuS and VolSDF, because only images are used as supervision. For a texture-less background, any surface prediction is acceptable, since those regions incur very small rendering loss. This is why we propose a consistency-aware fine-tuning (FT) scheme: it enhances the model predicted by the generic network and removes the free surfaces in the background.

Modifying this parameter will only influence the color of the synthesized images. [screenshot]

If you want the generic model to predict clean surfaces of the target object, you have to preprocess the input images to mask out the background, and enforce the background region to be empty during training.
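This is not SparseNeuS's actual training code, just a minimal numpy sketch of the idea: supervise color only on foreground pixels, and add a term that pushes the rendered opacity toward zero wherever the (hypothetical) binary mask marks background.

```python
import numpy as np

def masked_rgb_loss(pred_rgb, gt_rgb, mask):
    """Mean-squared color error computed only over foreground (mask == 1) pixels."""
    fg = mask.astype(bool)
    return float(np.mean((pred_rgb[fg] - gt_rgb[fg]) ** 2))

def background_empty_loss(pred_alpha, mask):
    """Push accumulated opacity toward zero wherever the mask marks background,
    discouraging the network from placing surfaces there."""
    bg = ~mask.astype(bool)
    return float(np.mean(pred_alpha[bg] ** 2)) if bg.any() else 0.0

# toy 4x4 image whose centre 2x2 block is foreground
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
gt = np.random.rand(4, 4, 3)
rgb_loss = masked_rgb_loss(gt, gt, mask)                # identical images -> 0
bg_loss = background_empty_loss(np.ones((4, 4)), mask)  # fully opaque bg is penalized
```

The total loss would then be something like `rgb_loss + lambda_bg * bg_loss`, with the weight chosen per dataset.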

In the figures of the paper, we just use a smaller bounding box to cut off the predicted mesh of the generic model and keep only the foreground part. If you train the generic model on a larger dataset rather than just one scene, the surfaces of the foreground and background will separate, as in the first figure you posted in this issue.
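The bounding-box cut described above can be sketched with plain numpy (this is an illustration, not the repo's code): drop every face that has a vertex outside an axis-aligned box, then compact the vertex array.

```python
import numpy as np

def crop_mesh_to_bbox(vertices, faces, bbox_min, bbox_max):
    """Keep only the faces whose three vertices all lie inside the box,
    then reindex so the vertex array is compact again."""
    inside = np.all((vertices >= bbox_min) & (vertices <= bbox_max), axis=1)
    kept = faces[inside[faces].all(axis=1)]      # faces fully inside the box
    used = np.unique(kept)                       # vertices still referenced
    remap = np.full(len(vertices), -1, dtype=np.int64)
    remap[used] = np.arange(len(used))
    return vertices[used], remap[kept]

# two triangles: one inside the unit box, one far outside
verts = np.array([[0.1, 0.1, 0.1], [0.5, 0.1, 0.1], [0.1, 0.5, 0.1],
                  [5.0, 5.0, 5.0], [6.0, 5.0, 5.0], [5.0, 6.0, 5.0]])
faces = np.array([[0, 1, 2], [3, 4, 5]])
v, f = crop_mesh_to_bbox(verts, faces, np.zeros(3), np.ones(3))
```

A library such as trimesh offers similar cropping, but the numpy version makes the reindexing step explicit.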

MNILjj commented 1 year ago

Thank you for your reply! I will try it.

zjhthu commented 1 year ago

I find that using occupancy_mask when extracting the mesh can remove the background. More specifically, I uncomment this line. But I'm not sure whether this will hurt the final mesh accuracy if the initial occupancy mask is not good enough. @flamehaze1115
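The general idea of an occupancy mask at extraction time can be sketched as follows (a simplified stand-in for the repo's code, with hypothetical names): before running marching cubes, overwrite the SDF with a positive "empty" value wherever the mask is zero, so no surface is extracted there. The caveat in the comment above applies: a bad mask will carve away real geometry.

```python
import numpy as np

def apply_occupancy_mask(sdf_volume, occupancy_mask, empty_value=1.0):
    """Overwrite the SDF with a positive 'empty space' value wherever the
    occupancy mask is zero, so marching cubes finds no surface there."""
    sdf = sdf_volume.copy()
    sdf[occupancy_mask == 0] = empty_value
    return sdf

# toy 8^3 SDF of a sphere, with a mask that keeps only the centre region
grid = np.linspace(-1, 1, 8)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.8   # zero-level set = sphere surface
mask = np.zeros((8, 8, 8), dtype=np.uint8)
mask[2:6, 2:6, 2:6] = 1                   # pretend occupancy mask
masked_sdf = apply_occupancy_mask(sdf, mask)
```

The masked volume would then be passed to a marching-cubes routine (e.g. `skimage.measure.marching_cubes`) in place of the raw SDF.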