Riga2 / TensoSDF

[SIGGRAPH 2024] TensoSDF: Roughness-aware Tensorial Representation for Robust Geometry and Material Reconstruction
MIT License

question about train process #5

Closed JiatengLiu closed 2 weeks ago

JiatengLiu commented 2 weeks ago

Hi, in #1 you showed your training process. The RAM usage is only about 6 GB. Do you render the whole image in a single training iteration, or just a patch like traditional NeRF?

Riga2 commented 2 weeks ago

> Hi, in #1 you showed your training process. The RAM usage is only about 6 GB. Do you render the whole image in a single training iteration, or just a patch like traditional NeRF?

Hi, we render 1024 random pixels per iteration (these pixels may come from different images) during training. You can modify `_train_raynum` in `network/shapeRenderer.py` according to your GPU memory.
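The sampling strategy described above can be sketched as follows. This is an illustrative helper, not the actual TensoSDF code; only the batch-size parameter (`_train_raynum` in `network/shapeRenderer.py`) comes from the repo, and the function name and array layout are assumptions:

```python
import numpy as np

def sample_train_rays(images, train_raynum=1024, rng=None):
    """Sample `train_raynum` random pixels across the whole training set.

    `images` is assumed to be an array of shape (N, H, W, 3); the returned
    colors are the ground-truth supervision for the sampled rays.
    """
    if rng is None:
        rng = np.random.default_rng()
    n, h, w, _ = images.shape
    # Pixels may come from different images, so sample an image index per ray.
    img_idx = rng.integers(0, n, size=train_raynum)
    ys = rng.integers(0, h, size=train_raynum)
    xs = rng.integers(0, w, size=train_raynum)
    colors = images[img_idx, ys, xs]  # (train_raynum, 3) ground-truth colors
    return img_idx, ys, xs, colors
```

Because each batch is a fixed number of rays rather than a full image, peak memory stays roughly constant regardless of image resolution.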

JiatengLiu commented 2 weeks ago

Thanks for your reply! I want to render a full image in a single training iteration. Have you measured how much VRAM that requires? And do you have a better solution to this problem?

Riga2 commented 2 weeks ago

> Thanks for your reply! I want to render a full image in a single training iteration. Have you measured how much VRAM that requires? And do you have a better solution to this problem?

For an 800x800 image, it is difficult to directly render at full resolution with TensoSDF or other NeRF variants. Usually we split the image into several chunks/patches and then combine them into the full image, just as we do during testing. If I remember correctly, 4096 pixels per training iteration is the limit for our method on an RTX 4090 (24 GB). Of course, you can also reduce the number of samples per ray, but that may lower quality.
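The chunked rendering described above can be sketched as a small wrapper. This is a hypothetical helper mirroring the test-time evaluation pattern, not the repo's actual code; `render_fn` stands in for whatever maps a batch of rays to colors:

```python
import numpy as np

def render_image_chunked(render_fn, rays, chunk=4096):
    """Render all rays of an image in fixed-size chunks to bound VRAM use.

    `rays` holds one ray per pixel, flattened to shape (H*W, D); `render_fn`
    maps a (chunk, D) batch of rays to (chunk, 3) colors. Results are
    concatenated back into a single (H*W, 3) array.
    """
    outs = [render_fn(rays[i:i + chunk])            # render one chunk at a time
            for i in range(0, rays.shape[0], chunk)]
    return np.concatenate(outs, axis=0)             # reassemble the full image
```

For an 800x800 image this issues 157 chunks of at most 4096 rays each, so peak memory matches a single training batch; the result can then be reshaped to `(800, 800, 3)`.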

JiatengLiu commented 2 weeks ago

I understand. I will follow your advice.