ayaanzhaque / instruct-nerf2nerf

Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions (ICCV 2023)
https://instruct-nerf2nerf.github.io/
MIT License

GPU memory usage: CUDA out of memory problems. #10

Closed: frankjiang closed this issue 1 year ago

frankjiang commented 1 year ago

The pipeline allocates a lot of GPU memory. I've tried reducing the number of source images and using in2n-tiny, but neither worked for me. My device: Tesla V100 with 32 GB memory.

Could you offer some guidance on reducing the GPU memory usage?

ayaanzhaque commented 1 year ago

What resolution are your input images? Could you train at a lower resolution? You will have to re-train your base nerfacto model, and for both the nerfacto training and the in2n training, add the following to the end of your command:

nerfstudio-data --downscale-factor {2, 4, 6, 8}. Downscale by whatever factor is necessary.
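
For reference, here is a sketch of what the two commands might look like with a downscale factor of 4. The data path, load directory, and prompt are placeholders, and the in2n flags follow the repo's README; adjust them to your setup:

```bash
# 1. Re-train the base nerfacto model at a lower resolution
ns-train nerfacto --data {PROCESSED_DATA_DIR} nerfstudio-data --downscale-factor 4

# 2. Run Instruct-NeRF2NeRF with the same downscale factor, loading the new checkpoint
ns-train in2n --data {PROCESSED_DATA_DIR} \
  --load-dir {outputs/.../nerfstudio_models} \
  --pipeline.prompt "your edit instruction" \
  --pipeline.guidance-scale 7.5 \
  --pipeline.image-guidance-scale 1.5 \
  nerfstudio-data --downscale-factor 4
```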

frankjiang commented 1 year ago

Thanks a lot @ayaanzhaque! This temporarily solved my problem, but I'm still worried about the loss of detail. Is there any other way to reduce memory usage, such as reducing the batch_size during training? Or is the default setting already 1?

ayaanzhaque commented 1 year ago

As of now, I don't believe there is a way to reduce the memory any further. All the results in our paper were trained at about 512 resolution. The main bottleneck is the diffusion model, which is currently quite costly.

frankjiang commented 1 year ago

I agree, and I will work on it later. The speed and memory usage of diffusion models are still a real limitation. Anyway, thanks a lot again!

ayaanzhaque commented 1 year ago

haha ya, it is unfortunate that the diffusion process is so costly