Closed JiatengLiu closed 2 weeks ago
Hi, in #1 you showed your training process. The RAM usage is only about 6 GB. Can you render the whole image in a single training iteration, or do you just render a patch like traditional NeRF?
Hi, we render 1024 random pixels per iteration during training (these pixels may come from different images). You can modify "_train_raynum" in "network/shapeRenderer.py" according to your GPU memory.
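A minimal sketch of this kind of random-pixel batching, where one training batch can mix rays from different images. The function and argument names here are illustrative, not the repo's actual API:

```python
import numpy as np

def sample_random_rays(rays_o_all, rays_d_all, colors_all, train_raynum=1024):
    # rays_o_all / rays_d_all / colors_all: (N_total, 3) arrays flattened over
    # every pixel of every training image. Drawing indices over the whole set
    # means a single batch can contain rays from different images.
    idx = np.random.randint(0, rays_o_all.shape[0], size=train_raynum)
    return rays_o_all[idx], rays_d_all[idx], colors_all[idx]
```

Lowering `train_raynum` is then the direct knob for trading iteration speed against GPU memory.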
Thanks for your reply! I want to render a full image in a single training iteration. Have you tried how much VRAM that needs? And do you have a better solution to this problem?
For an 800x800 image, it is hard to directly render the full resolution under TensoSDF or other NeRF variants. Usually we split the image into several chunks/patches and then combine them into a full image, just as we do during testing. If I remember correctly, 4096 pixels per training iteration is the limit for our method on a 4090 GPU (24 GB). Of course, you can also reduce the number of samples per ray, but that may lower quality.
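The chunked rendering described above can be sketched as follows; `render_fn` stands in for whatever per-ray rendering routine the model exposes, so the names are assumptions, not the repo's API:

```python
import numpy as np

def render_full_image(render_fn, rays, chunk=4096):
    # Render the flattened rays of one image in chunks and stitch the results.
    # Peak memory is bounded by the chunk size, not the full image resolution.
    outs = [render_fn(rays[i:i + chunk]) for i in range(0, rays.shape[0], chunk)]
    return np.concatenate(outs, axis=0)
```

The stitched output can then be reshaped back to (H, W, 3) for an 800x800 image.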
I understand. I will follow your advice.