Open xyIsHere opened 1 year ago
I'm facing the same problem. Have you solved it? Thank you.
Not yet.
Same issue here. Have you tried any other datasets? On the Blender dataset (e.g. lego), I also ran into several bugs.
An 80 GB A100 is enough. It needs 70+ GB of GPU memory to render the whole 4K images.
Dear authors,
Thank you for your great work and amazing results. I tried to reproduce the video results provided in this repo. Following your instructions, I finished the encoder pretraining, but then I got a 'CUDA out of memory' error when jointly training the encoder and decoder. Could you share the GPU memory requirement for running this code?
Thank you for your feedback. The previous code fed the whole image into the decoder at inference time and generated a 4K image in a single step, which cost a lot of GPU memory (~78 GB). I have changed the default setting to slice the image during inference, so the GPU memory cost of inference is now ~18 GB.
You can use the `test_tile` option to control the slice size (currently 510) to reduce the GPU memory cost further. See the README for specific instructions.
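For anyone curious how the slicing trick bounds peak memory, here is a minimal sketch of the idea. The function and parameter names (`render_tiled`, `render_fn`, `tile`) are hypothetical illustrations of what `test_tile` controls, not the repo's actual code:

```python
import numpy as np

def render_tiled(render_fn, image_hw, tile=510):
    """Render a large image in tile-sized slices so peak memory is
    bounded by one tile rather than the full 4K frame.

    render_fn(y0, y1, x0, x1) must return a (y1-y0, x1-x0, 3) array
    for that crop. This is a hypothetical helper, not the repo's API.
    """
    H, W = image_hw
    out = np.zeros((H, W, 3), dtype=np.float32)
    for y0 in range(0, H, tile):
        for x0 in range(0, W, tile):
            y1, x1 = min(y0 + tile, H), min(x0 + tile, W)
            # Only one tile is decoded at a time, then written out.
            out[y0:y1, x0:x1] = render_fn(y0, y1, x0, x1)
    return out

# Toy usage: a stand-in "renderer" that encodes pixel coordinates.
def toy_render(y0, y1, x0, x1):
    ys, xs = np.meshgrid(np.arange(y0, y1), np.arange(x0, x1), indexing="ij")
    return np.stack([ys, xs, np.zeros_like(ys)], axis=-1).astype(np.float32)

img = render_tiled(toy_render, (2160, 3840), tile=510)  # one 4K frame
```

With edge tiles clamped to the image bounds, the output is identical to a single full-frame pass whenever the per-tile renderer has no cross-tile dependencies; memory drops from the whole frame's activations to a single tile's.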
The training configs for the Blender dataset are slightly different from LLFF. I have updated the Blender training configs in the README. Thanks.
Thanks a lot! I will make changes following your suggestions.