kwea123 / nerf_pl

NeRF (Neural Radiance Fields) and NeRF in the Wild using pytorch-lightning
https://www.youtube.com/playlist?list=PLDV2CyUo4q-K02pNEyDr7DYpTQuka3mbV
MIT License

About the editing part #84

Closed jason718 closed 3 years ago

jason718 commented 3 years ago

Can you give more details about how to achieve the editing operations?

kwea123 commented 3 years ago

I believe you can find some details in the recent PlenOctree project. We are probably using the same concept, although I don't know how he implements it.

I will briefly describe my implementation, but I cannot share the code.

  1. For object removal, define custom bounding boxes (specify the bounds) and set the sigma to zero inside them as post-processing. The enclosed space then renders as transparent (see the sketches after this list).
  2. For mesh insertion, I use ray tracing. For each pixel we have a ray passing through; compute the color, alpha and the (first) intersection point of that ray w.r.t. the mesh. For the details of ray tracing you can look at my implementation. After calculating the color, alpha and the intersection, pass that information to the volume rendering pipeline and treat it as an additional sample point (also sketched below).
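
To make the object-removal step concrete, here is a minimal sketch of the idea. It is not the actual implementation; the helper name `remove_objects`, the tensor shapes and the box format are illustrative, and the boxes are assumed to be axis-aligned in the same world coordinates as the NeRF samples.

```python
import torch
from typing import List, Tuple

def remove_objects(xyz: torch.Tensor,
                   sigma: torch.Tensor,
                   boxes: List[Tuple[torch.Tensor, torch.Tensor]]) -> torch.Tensor:
    """
    xyz:   (N_rays, N_samples, 3) sample positions in world coordinates
    sigma: (N_rays, N_samples)    densities predicted by the NeRF for those samples
    boxes: list of (min_corner, max_corner), each a (3,) tensor in the same
           world coordinates as xyz
    Returns sigma with zeros inside every box, so the enclosed space becomes
    transparent when volume-rendered.
    """
    sigma = sigma.clone()
    for box_min, box_max in boxes:
        # Mask of samples that fall inside this axis-aligned box.
        inside = ((xyz >= box_min) & (xyz <= box_max)).all(dim=-1)  # (N_rays, N_samples)
        sigma[inside] = 0.0
    return sigma

# Example: remove everything inside the unit box centered at the origin.
# sigma_edited = remove_objects(xyz, sigma,
#                               [(torch.tensor([-0.5, -0.5, -0.5]),
#                                 torch.tensor([ 0.5,  0.5,  0.5]))])
```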
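For the mesh-insertion step, here is a minimal sketch of how the intersection could be treated as one extra sample in the standard NeRF alpha compositing. Again this is not the actual code: `composite_with_mesh`, the argument names and the shapes are my own, and the mesh color/alpha/depth are assumed to come from whatever ray tracer you use.

```python
import torch

def composite_with_mesh(z_vals, rgbs, alphas, mesh_z, mesh_rgb, mesh_alpha):
    """
    z_vals:     (N_rays, N_samples)     depths of NeRF samples along each ray
    rgbs:       (N_rays, N_samples, 3)  colors of NeRF samples
    alphas:     (N_rays, N_samples)     alphas of NeRF samples, 1 - exp(-sigma * delta)
    mesh_z:     (N_rays,)               depth of the first ray/mesh intersection
                                        (+inf where the ray misses the mesh)
    mesh_rgb:   (N_rays, 3)             mesh color at the intersection
    mesh_alpha: (N_rays,)               mesh alpha at the intersection (0 where missed)
    Returns the composited color per ray, (N_rays, 3).
    """
    # Append the mesh intersection as one extra sample, then sort all samples
    # by depth so compositing proceeds front to back.
    z_all = torch.cat([z_vals, mesh_z.unsqueeze(-1)], dim=-1)
    rgb_all = torch.cat([rgbs, mesh_rgb.unsqueeze(1)], dim=1)
    alpha_all = torch.cat([alphas, mesh_alpha.unsqueeze(-1)], dim=-1)

    z_all, idx = torch.sort(z_all, dim=-1)
    alpha_all = torch.gather(alpha_all, 1, idx)
    rgb_all = torch.gather(rgb_all, 1, idx.unsqueeze(-1).expand_as(rgb_all))

    # Standard front-to-back compositing: w_i = alpha_i * prod_{j<i} (1 - alpha_j).
    transmittance = torch.cumprod(
        torch.cat([torch.ones_like(alpha_all[:, :1]), 1 - alpha_all + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha_all * transmittance
    return (weights.unsqueeze(-1) * rgb_all).sum(dim=1)
```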
kwea123 commented 3 years ago

You can also try https://github.com/DLR-RM/BlenderProc if you don't want to implement ray tracing yourself (it takes a lot of time to learn and implement correctly, to be honest). It directly gives you the color, alpha and depth of a scene (which can contain multiple meshes). You only need to make sure the coordinate system is the same.
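
As a rough sketch of how such renderer outputs could feed the compositing step from the previous comment: once the renderer's camera pose and intrinsics match the NeRF camera, its per-pixel RGBA and depth map can be reshaped into the per-ray mesh inputs. The function name and the depth convention below are assumptions, not part of BlenderProc's API; depending on the renderer settings the depth map may store z-depth or Euclidean ray distance, and this sketch assumes z-depth together with NeRF ray directions whose camera-space z component has magnitude 1, so the ray parameter equals the depth.

```python
import numpy as np
import torch

def mesh_samples_from_render(rgba: np.ndarray, depth: np.ndarray):
    """
    rgba:  (H, W, 4) float image in [0, 1] rendered from the same camera as the NeRF view
    depth: (H, W)    z-depth along the camera's viewing axis; 0 or inf where the
                     ray misses the mesh
    Returns (mesh_z, mesh_rgb, mesh_alpha) shaped (H*W,), (H*W, 3), (H*W,),
    ready to pass to a compositing function like composite_with_mesh() above.
    """
    rgba = torch.from_numpy(rgba).float().reshape(-1, 4)
    depth = torch.from_numpy(depth).float().reshape(-1)

    mesh_rgb = rgba[:, :3]
    mesh_alpha = rgba[:, 3]

    # Rays that miss the mesh: push the sample to infinity and zero its alpha
    # so it contributes nothing to the compositing.
    miss = ~torch.isfinite(depth) | (depth <= 0)
    mesh_z = torch.where(miss, torch.full_like(depth, float('inf')), depth)
    mesh_alpha = torch.where(miss, torch.zeros_like(mesh_alpha), mesh_alpha)
    return mesh_z, mesh_rgb, mesh_alpha
```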