frozoul / 4K-NeRF

Official implementation of arxiv paper "4K-NeRF: High Fidelity Neural Radiance Fields at Ultra High Resolutions"

Out of memory during training. #3

Open xyIsHere opened 1 year ago

xyIsHere commented 1 year ago

Dear authors,

Thank you for your great work and amazing results. I tried to reproduce the video results that you provided in this repo. Following your instructions, I finished the encoder pretraining and then I got the 'CUDA out of memory' error when jointly training the encoder and decoder. So could you share with me the GPU memory requirement for running this code?

zf-666 commented 1 year ago

I'm running into the same problem. Have you solved it? Thank you.

xyIsHere commented 1 year ago

> I'm running into the same problem. Have you solved it? Thank you.

Not yet.

OvOtoQAQ commented 1 year ago

Same issue here. Have you tried any other datasets? On the Blender dataset, e.g. lego, I ran into several bugs as well.

Spark001 commented 1 year ago

An 80 GB A100 is enough. It needs 70+ GB of GPU memory to render the whole 4K image in one pass.
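As a rough back-of-the-envelope, what dominates is holding full-resolution feature maps for the decoder. The channel count and the number of live feature maps below are illustrative assumptions, not values taken from the 4K-NeRF code:

```python
# Back-of-the-envelope activation-memory estimate for decoding a full 4K frame
# in a single pass. The channel count and the number of simultaneously live
# feature maps are illustrative assumptions, not values from the 4K-NeRF code.
width, height = 3840, 2160   # 4K frame
channels = 64                # assumed decoder feature width
bytes_per_value = 4          # float32

one_map_gib = width * height * channels * bytes_per_value / 1024**3
print(f"one {channels}-channel feature map at 4K: {one_map_gib:.2f} GiB")

# Layer outputs, skip connections and (during joint training) autograd buffers
# all stay resident, so the total grows quickly with the number of live maps.
for live_maps in (8, 16, 32):
    print(f"{live_maps} live maps: {live_maps * one_map_gib:.1f} GiB")
```

With a few dozen live maps this already lands in the tens of GiB, which is consistent with the 70+ GB figure.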

frozoul commented 1 year ago

> Dear authors,
>
> Thank you for your great work and amazing results. I tried to reproduce the video results that you provided in this repo. Following your instructions, I finished the encoder pretraining and then I got the 'CUDA out of memory' error when jointly training the encoder and decoder. So could you share with me the GPU memory requirement for running this code?

Thank you for your feedback. The previous code fed the whole image into the decoder at inference time and generated the 4K output in a single step, which costs a lot of GPU memory (~78 GB). I have changed the default setting to slice the image during inference, and the GPU memory cost of inference is now ~18 GB.

You can use the test_tile option to control the slice size (the current default is 510) to reduce the GPU memory cost further. See the README for specific instructions.
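Roughly, the slicing amounts to running the decoder tile by tile rather than on the whole frame. The following is only a minimal PyTorch sketch of that pattern, to show why peak memory drops; render_in_tiles and decode_tile are stand-ins, not the actual functions in this repo:

```python
import torch

@torch.no_grad()
def render_in_tiles(decode_tile, features, tile=510):
    """Decode a full-resolution feature map tile by tile.

    decode_tile: callable mapping a (1, C, h, w) feature tile to a
                 (1, 3, h, w) RGB tile (stand-in for the real decoder).
    features:    (1, C, H, W) feature map covering the whole frame.
    tile:        tile side length, analogous to the test_tile setting.
    """
    _, _, H, W = features.shape
    out = torch.zeros(1, 3, H, W, device=features.device)
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            y1, x1 = min(y + tile, H), min(x + tile, W)
            # Only this tile's activations are alive on the GPU at once,
            # so peak memory scales with tile**2 instead of H * W.
            out[:, :, y:y1, x:x1] = decode_tile(features[:, :, y:y1, x:x1])
    return out
```

A convolutional decoder run this way normally also needs a small overlap between tiles (decode padded tiles, then crop) so that no seams appear at tile borders; the sketch omits that for brevity.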

frozoul commented 1 year ago

> Same issue here. Have you tried any other datasets? On the Blender dataset, e.g. lego, I ran into several bugs as well.

The training configs for the Blender dataset are slightly different from those for LLFF. I have updated the Blender training configs in the README. Thanks.

xyIsHere commented 1 year ago

> Dear authors, Thank you for your great work and amazing results. I tried to reproduce the video results that you provided in this repo. Following your instructions, I finished the encoder pretraining and then I got the 'CUDA out of memory' error when jointly training the encoder and decoder. So could you share with me the GPU memory requirement for running this code?

> Thank you for your feedback. The previous code fed the whole image into the decoder at inference time and generated the 4K output in a single step, which costs a lot of GPU memory (~78 GB). I have changed the default setting to slice the image during inference, and the GPU memory cost of inference is now ~18 GB.
>
> You can use the test_tile option to control the slice size (the current default is 510) to reduce the GPU memory cost further. See the README for specific instructions.

Thanks a lot! I will make changes following your suggestions.