apchenstu / TensoRF

[ECCV 2022] Tensorial Radiance Fields, a novel approach to model and reconstruct radiance fields
MIT License

How to build a real-time renderer with TensoRF? #7

Closed UPstartDeveloper closed 2 years ago

UPstartDeveloper commented 2 years ago

Hello @apchenstu,

I was wondering whether it might be possible to render 3D models with TensoRF in real time. I don't know whether this is the direction your team planned to take with the paper, but I'm curious whether it would be possible to build something similar to what Google built for SNeRG: website link

Thanks for sharing this work with the community!

apchenstu commented 2 years ago

Hi, thanks for the recommendation. Yes, it would be interesting to build a real-time renderer, and I think it isn't hard to achieve by combining TensoRF with a customized CUDA kernel or OpenGL (as SNeRG does). Unfortunately, a real-time renderer isn't in our current plans; feel free to design one if you are interested :)
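The CUDA/OpenGL idea above usually means baking the trained field into a dense grid so the renderer can replace per-sample network queries with hardware-style trilinear texture lookups. A minimal numpy sketch of that lookup (the `query_fn` stand-in is hypothetical, not TensoRF's actual API; a real pipeline would call the trained model there):

```python
import numpy as np

def bake_to_grid(query_fn, resolution=128, bound=1.5):
    """Sample a radiance-field function on a dense 3D grid.
    query_fn is a hypothetical stand-in for a trained model's density query."""
    xs = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
    return query_fn(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)

def trilinear_lookup(grid, pts, bound=1.5):
    """Fetch values at continuous points by trilinear interpolation,
    i.e. what a GLSL sampler3D or CUDA 3D texture does in hardware."""
    res = grid.shape[0]
    # Map [-bound, bound] to continuous voxel coordinates [0, res-1].
    u = (pts + bound) / (2 * bound) * (res - 1)
    lo = np.clip(np.floor(u).astype(int), 0, res - 2)
    f = u - lo
    out = np.zeros(len(pts))
    # Accumulate the 8 corner contributions weighted by distance.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, f[:, 0], 1 - f[:, 0])
                     * np.where(dy, f[:, 1], 1 - f[:, 1])
                     * np.where(dz, f[:, 2], 1 - f[:, 2]))
                out += w * grid[lo[:, 0] + dx, lo[:, 1] + dy, lo[:, 2] + dz]
    return out
```

In a real renderer the loop would live in a shader or CUDA kernel, but the arithmetic is the same; the baked grid is what makes per-frame evaluation cheap.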

UPstartDeveloper commented 2 years ago

If anyone else finds this problem interesting, I started a fork of this repo so we can work on it!

The approach I personally think we should take is:

  1. re-implement all the torch modules in flax, and
  2. then see whether we can drop a trained TensoRF straight into the baking pipeline (the bake.py script) that the SNeRG authors originally intended for NeRF.

cduguet commented 2 years ago

Hi Zain! I find this problem interesting too! Thanks for sharing.

UPstartDeveloper commented 2 years ago

Hi again,

I'm curious whether anyone has come across a technique or research paper on reconstructing a 3D mesh (1) from RGB-D scans, (2) in real time, and (3) for a non-deformable object?

The reason I ask is that I was able to render TensoRF's predicted images for a custom dataset of mine (here's an example):

[image: example TensoRF rendering (019999_003)]

Since TensoRF outputs both RGB values and depth maps, I'm starting to wonder whether we could use those directly for the reconstruction.
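Using the predicted depth for reconstruction typically starts by back-projecting each depth map into a camera-space point cloud with a pinhole model. A minimal sketch, assuming the camera intrinsics (`fx`, `fy`, `cx`, `cy`) are known, which is a simplification of a full fusion pipeline:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into a camera-space point cloud
    using pinhole intrinsics (assumed known here)."""
    h, w = depth.shape
    # Pixel coordinate grids: v indexes rows, u indexes columns.
    v, u = np.mgrid[0:h, 0:w]
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Fusing the per-view clouds into one mesh would still need the camera-to-world poses (TensoRF already has these for its training views) plus a surface-extraction step such as TSDF fusion or Poisson reconstruction.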

UPstartDeveloper commented 2 years ago

On second thought, I guess using RGB-D would be overthinking the problem. Earlier in this issue, @apchenstu brought up the following idea:

> Hi, thanks for the recommendation. Yes, it would be interesting to build a real-time renderer, and I think it isn't hard to achieve by combining TensoRF with a customized CUDA kernel or OpenGL (as SNeRG does)...

This is probably a dumb question, but could you please elaborate on this? My reading is that we could pass a trained TensoRF's baked outputs to a custom fragment shader (say, in OpenGL) and use that to predict the view-dependent colors in real time?
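One common way a fragment shader handles view dependence (used by PlenOctrees, and related to SNeRG's deferred shading) is to bake low-order spherical-harmonics coefficients per point and evaluate them against the view direction at render time. A degree-1 sketch of that per-pixel computation, with illustrative coefficients rather than values from an actual trained model:

```python
import numpy as np

# Standard real spherical-harmonics constants for degrees 0 and 1.
C0 = 0.28209479177387814
C1 = 0.4886025119029199

def sh_to_rgb(sh_coeffs, view_dir):
    """Evaluate a degree-1 SH color for one view direction: the kind of
    tiny computation a fragment shader can afford per pixel.
    sh_coeffs: (4, 3) array, one row of SH coefficients per basis
    function, one column per RGB channel."""
    x, y, z = view_dir / np.linalg.norm(view_dir)
    basis = np.array([C0, -C1 * y, C1 * z, -C1 * x])
    return basis @ sh_coeffs  # (3,) RGB, before any activation
```

Higher SH degrees add more basis terms but the lookup-then-dot-product structure is the same, which is why it ports so cleanly to GLSL or a CUDA kernel.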

UPstartDeveloper commented 2 years ago

Hi again. Just to provide an update on this: I started a new repo called tensorfvis here, where we're building the renderer on top of the nerfvis library (which is itself based on the PlenOctrees paper). I'll open new issues there to continue this discussion.