Open k0beLeenders opened 3 years ago
Hi @KobeLeenders,
With the blender_config.txt provided in the repo (reproduced below), training a NeRF uses 8.9 GB of GPU RAM.
Usage starts at around 4.7 GB, but after a few iterations it jumps to 8.9 GB (possibly after the first preview re-render).
expname = blender_paper_lego
basedir = ./logs
datadir = ./data/nerf_synthetic/lego
dataset_type = blender
no_batching = True
use_viewdirs = True
white_bkgd = True
lrate_decay = 500
N_samples = 64
N_importance = 128
N_rand = 1024
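As a rough back-of-envelope (assuming the standard two-pass NeRF scheme, where the fine network re-evaluates the coarse sample locations together with the importance samples), the settings above determine how many MLP queries each training iteration issues, which is what drives the memory footprint:

```python
# Back-of-envelope query count per training iteration, derived from
# the config values above. Assumes the fine pass evaluates
# N_samples + N_importance points per ray, as in the NeRF paper.
N_rand = 1024        # rays per batch
N_samples = 64       # coarse samples per ray
N_importance = 128   # extra fine samples per ray

coarse = N_rand * N_samples                   # coarse-network queries
fine = N_rand * (N_samples + N_importance)    # fine-network queries
total = coarse + fine
print(coarse, fine, total)                    # 65536 196608 262144
```

Halving N_rand halves all three numbers, which is why lowering the ray batch size is the usual first lever when memory runs out.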
The model is resolution-agnostic, so GPU memory shouldn't be a hard limit. You can reduce the number of rays processed per network pass and the end result will be the same, just slower to train. See tiny_nerf.ipynb: this is the chunk parameter in batchify(fn, chunk).
I want to experiment a little with NeRF, but when I use my GTX 970 I get a resource-exhausted error (understandably). I can get a GTX 1070, but I'm not sure whether that will suffice.
Any idea what the minimum requirements would be for training these models at a reasonably fast speed? How much video memory is needed at a minimum?