NVlabs / instant-ngp

Instant neural graphics primitives: lightning fast NeRF and more
https://nvlabs.github.io/instant-ngp

How to reduce "batch size" to save memory #928


HungNgoCT commented 2 years ago

Hi there.

Thanks for this interesting work.

I am using an RTX 2070 Super GPU with 8 GB of VRAM. It can train on a small number of images (fewer than about 80), but it runs out of GPU memory on larger sets, such as 100 images.

I also cannot increase the resolution (such as 1080x1920) for previewing during training, again because of running out of GPU memory.

I understand that reducing the "batch size" or chunk size saves GPU memory. Can anyone point me to the place in the code where I can reduce the batch size? Also, is there a way to save the network, then load it and render the output later, to save memory?

Any help would be highly appreciated.
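A minimal sketch of the save-then-render-later workflow asked about above, using the pyngp Python bindings. The method names here (Testbed, load_training_data, frame, training_step, save_snapshot, load_snapshot) and the scene/snapshot paths are assumptions based on how scripts/run.py drives training; check your checkout for the exact API.

```python
# Hypothetical sketch, not the confirmed instant-ngp API.
try:
    import pyngp as ngp  # built alongside instant-ngp; absent elsewhere
except ImportError:
    ngp = None

def train_then_render(scene, snapshot="trained.msgpack", n_steps=5000):
    """Train, persist the network to disk, then reload it for rendering later."""
    if ngp is None:
        return "pyngp not built; run inside an instant-ngp checkout"
    testbed = ngp.Testbed()
    testbed.load_training_data(scene)  # example scene path below
    while testbed.frame() and testbed.training_step < n_steps:
        pass
    testbed.save_snapshot(snapshot, False)  # weights now live on disk
    # In a fresh process (with training buffers freed), one would reload
    # the snapshot and render at full resolution:
    #   testbed.load_snapshot(snapshot)
    return snapshot

print(train_then_render("data/nerf/fox"))
```

Rendering from a reloaded snapshot in a separate process avoids holding training buffers and full-resolution render targets in GPU memory at the same time.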

vermouth599 commented 2 years ago

I am facing the same issue. Anyone have an idea?

Frydesk commented 2 years ago

Any update on this issue?

carlzhang94 commented 1 year ago

Refer to src/python_api.cu. If you use the Python scripts, you can add testbed.training_batch_size = 1 << 16 somewhere before the training section in scripts/run.py to reduce GPU memory usage. The default setting may be 262144 (2^18). In some of my cases, reducing the batch size even made training faster.
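A quick check of the numbers in the comment above: dropping from the default 2^18 rays per step to 2^16 cuts the per-step ray count by a factor of four. The attribute name testbed.training_batch_size comes from the comment above; the exact default may differ by version.

```python
# Batch sizes in instant-ngp are powers of two.
default_batch = 1 << 18  # 262144 rays per training step (assumed default)
reduced_batch = 1 << 16  # 65536 rays, i.e. 1/4 of the default
print(default_batch, reduced_batch, default_batch // reduced_batch)

# In scripts/run.py, before the training loop, one would then set
# (per the comment above):
#   testbed.training_batch_size = reduced_batch
```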