googleinterns / IBRNet

Apache License 2.0

Obtaining Mesh through marching cubes as in NeRF #2

Closed november07 closed 2 years ago

november07 commented 3 years ago

Hello! Thanks for the great work! I was wondering how we could obtain a 3D mesh model from IBRNet, as in NeRF. The input to the model includes the source views' viewing directions, and I am unsure how we could retrieve the sigma value at a specific (x, y, z) location.

Thank you!
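For reference, the NeRF-style extraction the question alludes to queries sigma on a regular 3D grid and then runs marching cubes on that grid. Below is a minimal sketch of the grid-query step; `toy_density` is a hypothetical stand-in for a trained density network, and the bounding box and resolution are illustrative assumptions:

```python
import numpy as np

def extract_density_grid(density_fn, bounds=(-1.0, 1.0), resolution=64):
    # Sample sigma on a regular 3D grid inside an axis-aligned bounding box.
    t = np.linspace(bounds[0], bounds[1], resolution)
    x, y, z = np.meshgrid(t, t, t, indexing="ij")
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    sigma = density_fn(pts).reshape(resolution, resolution, resolution)
    return sigma

# Hypothetical density function: a soft sphere of radius 0.5,
# standing in for a trained NeRF's sigma head.
def toy_density(pts):
    return np.maximum(0.0, 0.5 - np.linalg.norm(pts, axis=-1)) * 100.0

sigma = extract_density_grid(toy_density, resolution=32)
# The resulting grid could then be meshed with marching cubes, e.g.
# skimage.measure.marching_cubes(sigma, level=some_threshold).
```

This works for NeRF because sigma is a function of position alone; as the answer below explains, IBRNet has no such global density field to query.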

qianqianwang68 commented 2 years ago

I don't think IBRNet is the right method for obtaining 3D meshes. Unlike NeRF, IBRNet is local and view-dependent: to synthesize a target image it uses nearby source images, and when the target view changes, the input views change too. So there is no globally consistent geometry representation as in NeRF. In some sense it is just like multi-view stereo (MVS) methods, so it is possible that you could obtain a set of depth maps and fuse them as in MVS. But I don't think it will work better than real MVS methods that have been trained with ground-truth geometry (whereas IBRNet is only supervised on image colors). I've looked at the depth maps generated by IBRNet too; without per-scene training, they don't look very accurate, for the same reason.
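The MVS-style fusion suggested above amounts to back-projecting each view's depth map into world space and taking the union of the points (real MVS pipelines additionally filter by cross-view consistency, which is omitted here). A minimal sketch with NumPy, assuming a simple pinhole model where the camera looks down +z and `c2w` is a 4x4 camera-to-world matrix:

```python
import numpy as np

def backproject(depth, K, c2w):
    # depth: (H, W) depth map; K: (3, 3) intrinsics; c2w: (4, 4) camera-to-world pose.
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    rays_cam = pix @ np.linalg.inv(K).T        # pixel coords -> camera-space directions
    pts_cam = rays_cam * depth.reshape(-1, 1)  # scale each direction by its depth
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    return (pts_h @ c2w.T)[:, :3]              # camera frame -> world frame

def fuse_views(depths, Ks, poses):
    # Naive fusion: union of back-projected points from all views,
    # with no cross-view consistency filtering.
    return np.concatenate([backproject(d, K, p) for d, K, p in zip(depths, Ks, poses)])
```

The fused point cloud could then be meshed with standard surface reconstruction, but as noted above, without per-scene training the depth maps may be too noisy for this to compete with dedicated MVS methods.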