Closed darthgera123 closed 3 years ago
The training inputs are stored in https://github.com/kwea123/nerf_pl/blob/19a290103fd8df211a85a150daff861b53d59942/datasets/llff.py#L244-L245
So you can save these tensors to files and read them back, without recomputing them for each training run or on different GPUs. However, this should not prevent you from training: caching only reduces data I/O so the preprocessing runs faster.
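A minimal sketch of that load-or-compute caching pattern. Note: in the repo the cached objects are PyTorch tensors (so `torch.save`/`torch.load` would be the direct analogue), and the function and file names below are hypothetical — this just illustrates the idea with NumPy:

```python
import os
import tempfile
import numpy as np

def load_or_compute(cache_path, compute_fn):
    """Load a cached array if it exists, otherwise compute it once and cache it."""
    if os.path.exists(cache_path):
        return np.load(cache_path)  # fast path: skip the expensive preprocessing
    arr = compute_fn()
    np.save(cache_path, arr)        # persist so later runs/GPUs can reuse it
    return arr

# Hypothetical stand-in for the expensive ray preprocessing
def expensive_preprocess():
    return np.arange(12, dtype=np.float32).reshape(4, 3)

cache_file = os.path.join(tempfile.gettempdir(), "all_rays_demo.npy")
if os.path.exists(cache_file):
    os.remove(cache_file)           # start clean for the demo

first = load_or_compute(cache_file, expensive_preprocess)   # computes and caches
second = load_or_compute(cache_file, expensive_preprocess)  # reads from the cache
print(np.array_equal(first, second))
```

With `torch.save(self.all_rays, path)` / `torch.load(path)` in place of the NumPy calls, the same pattern avoids redoing the ray generation on every run.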
Hey, there is no downscale or cache option in the dev branch, because of which I am unable to train the network. Could you guide me on how to add that, please @kwea123?