HungNgoCT opened 2 years ago
I ran into the same issue. Does anyone have an idea?
Any update on this issue?
Refer to src/python_api.cu.
If you use the Python scripts, you can try adding testbed.training_batch_size = 1 << 16
somewhere in scripts/run.py before the training section to reduce GPU memory usage. The default is 262144 (2^18).
In some of my cases, reducing the batch size even made training faster...
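For example, here is a minimal sketch of where such a line could go, assuming the pyngp bindings built from this repository and a setup similar to what scripts/run.py does (the scene path and step count below are placeholders):

```python
import pyngp as ngp  # Python bindings built from this repository

# Set up the testbed roughly the way scripts/run.py does.
testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_training_data("data/nerf/fox")  # placeholder scene path

# Reduce the training batch size from the default 262144 (2^18)
# to 65536 (2^16) to lower peak GPU memory usage.
testbed.training_batch_size = 1 << 16

# Train as usual afterwards.
testbed.shall_train = True
n_steps = 2000  # placeholder step count
while testbed.frame():
    if testbed.training_step >= n_steps:
        break
```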
Hi there.
Thanks for the interesting work.
I am using an RTX 2070 Super GPU with 8 GB of VRAM. It can train on a small number of images (roughly fewer than 80), but it cannot handle a larger set, such as 100 images, because it runs out of GPU memory.
I also cannot increase the resolution (e.g., to 1080x1920) during training for observation, again because of running out of GPU memory.
I understand that we can reduce the batch size, chunk size, etc. to save GPU memory. Can anyone point me to the place in the code where the batch size can be reduced like that? Also, is there a way to save the network, then load it and render the output later, to save memory?
Any help would be highly appreciated.
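For the saving/loading part, here is a rough sketch of what I imagined, based on the bindings I found in src/python_api.cu; the snapshot and render calls are my assumptions about the API and I have not verified the exact signatures:

```python
import pyngp as ngp

# After training: save the trained network so the training process can be shut down.
testbed.save_snapshot("my_scene.msgpack", False)  # second arg: include optimizer state (assumed)

# Later, in a fresh process: reload the snapshot and render without retraining.
testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_snapshot("my_scene.msgpack")

# Render at the target resolution; returns an image array (assumed signature).
image = testbed.render(1920, 1080, spp=8, linear=True)
```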