bennyguo / instant-nsr-pl

Neural Surface reconstruction based on Instant-NGP. Efficient and customizable boilerplate for your research projects. Train NeuS in 10min!
MIT License

Saving GPU memory during pre-loading #64

Closed wangyida closed 1 year ago

wangyida commented 1 year ago

Problem:

Loading all training images into a single PyTorch GPU tensor is unnecessary, and it can lead to OOM errors when the training data is large-scale.

Solution:

This PR makes it possible to lower the img_downscale ratio to at least half of the previous limit by loading all images into a numpy array before batched training, for the COLMAP data format. Switching the COLMAP dataloader from GPU tensors to a numpy array saves a large amount of GPU memory, especially when the training dataset has high resolution (e.g. 2000 x 2000) or a large number of images (e.g. > 1000).

With this change, the dataloader can handle high-resolution training runs (e.g. 1000 images at 2000 x 2000 pixels) that are not feasible with the original code.
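A back-of-envelope calculation shows why pre-loading everything onto the GPU fails at this scale. The numbers below are illustrative (float32 RGB, no auxiliary buffers), not measurements from the PR:

```python
# Rough GPU memory estimate for pre-loading a full image stack as float32 RGB.
# Illustrative numbers matching the scale mentioned above (1000 images, 2000x2000).
n_images = 1000
height = width = 2000
channels = 3
bytes_per_float32 = 4

gpu_bytes = n_images * height * width * channels * bytes_per_float32
print(f"{gpu_bytes / 1024**3:.1f} GiB")  # prints 44.7 GiB, beyond most single GPUs
```

Even before any model weights or intermediate activations are allocated, the raw images alone would exceed typical 24-48 GB cards, which is why keeping them in host (CPU) memory helps.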

In my experience the training speed is not affected, and the accuracy is unchanged on the mipnerf360 data.

bennyguo commented 1 year ago

Hi @wangyida ! It makes sense that loading all images onto the GPU consumes a lot of memory, but keeping them on the host also adds data transfer overhead. I think it's better to add a load_data_on_gpu option to the dataset and let the user choose whether to transfer data on the fly or not. Could you add this option to the dataset and the corresponding config files? Besides, I would prefer to load them as torch CPU tensors instead of numpy arrays to keep a consistent style between the two options. Would you please modify this too? Thanks for your contribution!
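The suggested option could look like the sketch below. This is a hypothetical illustration of the load_data_on_gpu idea, not the actual instant-nsr-pl dataset code; the class and method names are made up:

```python
import torch

class ImageStore:
    """Hypothetical sketch of a dataset holding images either on CPU or GPU.

    With load_data_on_gpu=False (the memory-saving path), the full image stack
    stays in host memory as a torch CPU tensor, and only the pixels sampled for
    each batch are transferred to the device on the fly.
    """

    def __init__(self, images, load_data_on_gpu=False, device="cuda"):
        # Keep everything as a torch tensor for a consistent style with the
        # GPU path; shape assumed (N, H, W, 3).
        self.all_images = torch.as_tensor(images)
        self.load_data_on_gpu = load_data_on_gpu
        self.device = device
        if load_data_on_gpu:
            # Original behavior: the whole stack is resident on the GPU.
            self.all_images = self.all_images.to(device)

    def sample_rgbs(self, img_idx, y, x):
        # Advanced indexing picks one pixel per (img_idx, y, x) triple.
        rgb = self.all_images[img_idx, y, x]
        if not self.load_data_on_gpu:
            # Transfer only the sampled pixels, not the full image stack.
            rgb = rgb.to(self.device)
        return rgb
```

The per-batch `.to(device)` copy is small (a few thousand ray colors), so the extra transfer cost is modest compared with holding tens of gigabytes of images resident on the GPU.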

wangyida commented 1 year ago

Sure I can modify it accordingly as you have suggested, thanks for sharing your nice work by the way.


wangyida commented 1 year ago

Can you review the changes again, @bennyguo? The system structure is kept unchanged now; only a few lines in the dataloader were modified.

wangyida commented 1 year ago

Done thanks

bennyguo commented 1 year ago

Great! Thanks:)