cvlab-stonybrook / s-volsdf

Official implementation of "S-VolSDF: Sparse Multi-View Stereo Regularization of Neural Implicit Surfaces" (ICCV 2023)
https://hao-yu-wu.github.io/s-volsdf/
MIT License

The actual training speed. #3

Closed chensjtu closed 7 months ago

chensjtu commented 9 months ago

Dear authors, congratulations on this exciting paper being accepted at ICCV 2023! However, when I train S-VolSDF on an RTX 3090, one scene takes "Train: 100009it [2:35:15, 10.74it/s]", which seems excessive for only 3 input images. In the paper, you claim: "Moreover, neural surface optimization only requires 10-15 minutes in current hardware to obtain good results because of strong geometry cues from MVS." I also wonder about the speed of the MVS-based part, since "better MVS performance at a faster speed" is one of S-VolSDF's main contributions.

hao-yu-wu commented 8 months ago

Hi, in the paper we train for the same number of iterations as VolSDF for a fair comparison. But we also found that training for only 10,000 iterations (about 15 minutes) is enough to obtain good geometry (the Chamfer distance is close to the result after 100,000 iterations). You can add opt_stepNs=[10000,0,0] to try it out. For example: python runner.py gpu=0 testlist=scan106 outdir=exps_mvs exps_folder=exps_vsdf opt_stepNs=[10000,0,0]
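
As a sanity check, the 15-minute figure is consistent with the throughput reported in the original post: at roughly 10.74 it/s, 10,000 iterations take about 15.5 minutes, while the full 100,000-iteration schedule takes about 2.6 hours. A small back-of-the-envelope sketch (the iteration counts and throughput are taken from this thread; nothing here calls the actual training code):

```python
# Estimate wall-clock training time from the throughput quoted in the issue.
def train_minutes(iterations: int, its_per_sec: float = 10.74) -> float:
    """Return the estimated training time in minutes at a fixed iteration rate."""
    return iterations / its_per_sec / 60.0

full = train_minutes(100_000)   # default schedule, matching VolSDF
short = train_minutes(10_000)   # reduced schedule via opt_stepNs=[10000,0,0]

print(f"100k iters: {full:.1f} min (~{full / 60:.1f} h)")  # ~155 min, ~2.6 h
print(f"10k iters:  {short:.1f} min")                      # ~15.5 min
```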