Closed jungeun122333 closed 1 month ago
Hi there, I don't know if anything else was being loaded onto the GPU at the same time, causing this error; you could check that. A workaround is to reduce the chunk_size: try setting it to 1. It doesn't affect the results.
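For context, the idea behind a chunk_size knob is that the model processes its inputs in smaller slices so only one slice's activations are resident on the GPU at a time, trading speed for peak memory. The sketch below is a hypothetical helper (`run_in_chunks` is not from the GaussCtrl codebase) that illustrates the pattern; the output is identical regardless of chunk size.

```python
import torch

def run_in_chunks(fn, x, chunk_size):
    """Apply fn to x in slices of chunk_size along dim 0.

    Only one chunk's intermediate activations are alive at a time,
    which lowers peak memory at the cost of more kernel launches.
    """
    outs = []
    for start in range(0, x.shape[0], chunk_size):
        outs.append(fn(x[start:start + chunk_size]))
    return torch.cat(outs, dim=0)

x = torch.randn(8, 4)
full = x * 2.0                                      # process everything at once
chunked = run_in_chunks(lambda t: t * 2.0, x, 1)    # chunk_size=1, lowest peak memory
assert torch.allclose(full, chunked)                # same result either way
```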
I'm encountering the same error at the same location even when I try with chunk_size=1.
The only notable difference seems to be the amount of memory required:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 19.39 GiB (GPU 0; 23.69 GiB total capacity; 8.49 GiB already allocated; 14.59 GiB free; 8.76 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
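As the error message suggests, when reserved memory far exceeds allocated memory you can ask PyTorch's caching allocator to avoid keeping large unsplittable blocks around. This is done through the documented `PYTORCH_CUDA_ALLOC_CONF` environment variable; the value 128 below is just an illustrative choice, and `train.py` stands in for whatever launch script is used.

```shell
# Cap the size of allocator blocks that may be kept unsplit,
# which can reduce fragmentation-driven OOMs.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# then launch training as usual, e.g.:
# python train.py
```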
Have you had a chance to clone the code from the repository and run it yourself? Allocating over 19 GB in a single call seems unusual, and I'm curious whether you've seen this as well.
Hi, I ran the code myself before and it worked fine. Could you please try adding this argument: --pipeline.diffusion_ckpt "jinggogogo/gaussctrl-sd15", and also reduce the number of reference views to 2 with --pipeline.ref_view_num 2?
Sorry, it was my fault. I just realized that I had used non-preprocessed data. Thank you for your kind answer.
Dear authors, thank you for your impressive work.
I was trying to reproduce your results using the script for the bear dataset. However, when I follow your script,
it raises a CUDA out-of-memory error, and it seems to try to allocate another 27 GB (!).
I'm confused, since you said you used a 24 GB NVIDIA RTX 56000, and I'm also using a 24 GB NVIDIA RTX 3090.
Do you have any idea why this issue is happening? Any kind of advice would be very helpful.
This is the full error log: