Hi! I think the ratio 0.5 is too high for these 360 scenes; maybe you should try 0.1 or 0.2. The mesh not being correctly exported could result from (1) the scene not having solid geometry, as in your case, or (2) the isosurface threshold needing to be tweaked in order to get good results.
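To illustrate where that threshold enters, here is a minimal sketch of mesh extraction with marching cubes (this uses scikit-image rather than the repo's actual export code, and the grid file and threshold value are placeholders):

```python
import numpy as np
from skimage import measure

# Placeholder: a density field sampled on a regular (N, N, N) grid.
density_grid = np.load("density_grid.npy")

# The isosurface level to tune: raising it keeps only denser regions,
# lowering it can pull in floaters and noise.
threshold = 5.0

verts, faces, normals, values = measure.marching_cubes(density_grid, level=threshold)
print(f"extracted {len(verts)} vertices and {len(faces)} faces")
```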
Hi, thanks for the reply. Actually, this is a problem I'm encountering even with the DTU dataset. In the NeuS paper they show good performance on the novel view synthesis task, but here that is not the case; it is even more problematic without a mask and when using the background model. One thing I found that somehow improves results is commenting out the line `comp_rgb = comp_rgb + self.background_color * (1.0 - opacity)`. I'm not sure why it helps, though. Any idea regarding this?
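For context, here is roughly what that step does (a sketch; the surrounding function and the `bg_rgb` name are my assumptions, not the repo's actual code):

```python
import torch

def composite(comp_rgb, opacity, background_color, bg_rgb=None):
    # comp_rgb: (N, 3) volume-rendered foreground color per ray
    # opacity:  (N, 1) accumulated alpha per ray
    if bg_rgb is not None:
        # A learned background model fills the residual transmittance itself.
        comp_rgb = comp_rgb + bg_rgb * (1.0 - opacity)
    # The quoted line then blends a constant color into the same residual;
    # with a background model active this stacks a second background on top,
    # which is my guess (unconfirmed) for why removing it changes results.
    return comp_rgb + background_color * (1.0 - opacity)

# Example shapes:
comp_rgb = torch.rand(1024, 3)
opacity = torch.rand(1024, 1)
out = composite(comp_rgb, opacity, background_color=torch.ones(3))
```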
I encounter the same issue. The background model is problematic within the NeuS framework.
Hello and thank you for your great work.
I tried running NeRF on a scene from the unbounded 360 dataset, using the nerf-colmap YAML as the config file. In the README you mention that testing does nothing but compare to white images, so the model would simply overfit to all images provided. I changed the dataloader to evaluate novel views with the model: the only change was selecting some images as training views and the others for testing, with a ratio of 0.5.
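The split itself was along these lines (a sketch of the idea, not the exact dataloader change; the image directory is illustrative):

```python
import glob
import numpy as np

# Hold out a fraction of the images as novel test views (illustrative paths).
all_image_paths = sorted(glob.glob("images/*.png"))
ratio = 0.5  # fraction of views held out for testing
n_images = len(all_image_paths)

rng = np.random.default_rng(0)
test_idx = rng.choice(n_images, size=int(ratio * n_images), replace=False)
train_idx = np.setdiff1d(np.arange(n_images), test_idx)

train_paths = [all_image_paths[i] for i in train_idx]
test_paths = [all_image_paths[i] for i in test_idx]
```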
The results on the testing views are very bad in this case (the generated images are too blurry and lack fine details), and the depth map is messed up too. I don't think the performance should deteriorate this much. Have you tested the background model on the novel view synthesis task?
I also noticed that you don't get an exported mesh even when overfitting to all views. I wonder if this is related, since, as far as I can tell, it is the density field that is used for marching cubes.
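This is roughly the pipeline I have in mind: sample the density field on a regular grid, then threshold it with marching cubes (a sketch; `density_fn`, the resolution, and the bound are assumptions on my part):

```python
import torch

@torch.no_grad()
def sample_density_grid(density_fn, resolution=256, bound=1.0):
    # Evaluate the density field on a regular grid spanning [-bound, bound]^3
    # so the result can be fed to marching cubes. In practice the queries
    # would be chunked; this evaluates all points at once for brevity.
    xs = torch.linspace(-bound, bound, resolution)
    coords = torch.stack(torch.meshgrid(xs, xs, xs, indexing="ij"), dim=-1)
    density = density_fn(coords.reshape(-1, 3))
    return density.reshape(resolution, resolution, resolution).cpu().numpy()
```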