Closed: VLAD-KONATA closed this issue 4 months ago
@VLAD-KONATA This problem has been solved by @lin-tianyu by detaching the sample from the gradient graph. If you're still experiencing this issue, please give it a try. Thank you.
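For reference, the fix amounts to making sure the sampling loop records no autograd history, either by detaching each intermediate sample or by wrapping the loop in `torch.no_grad()`. A minimal sketch of the pattern; the function names and the update rule here are illustrative, not the repository's actual sampler:

```python
import torch

def sample(model, shape, steps=20):
    """Toy reverse-diffusion loop illustrating graph-free sampling."""
    x = torch.randn(shape)
    with torch.no_grad():              # no autograd graph is recorded
        for t in reversed(range(steps)):
            eps = model(x, t)          # predicted noise at step t
            # Without no_grad() (or x = x.detach() here), each iteration
            # would keep its computation graph alive, so VRAM would grow
            # with every step instead of staying flat.
            x = x - 0.1 * eps
    return x
```

Either approach works; `torch.no_grad()` is usually cleaner because it prevents the graph from being built at all rather than discarding it after the fact.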
Thank you, and @lin-tianyu: it works now, much appreciated 🤗
GPU: NVIDIA Tesla P40, 24 GB VRAM. During the sampling task, VRAM usage stays at 1–2 GB with `--dpm_solver False --diffusion_steps 1000`.
With `--dpm_solver True --diffusion_steps 20`, VRAM usage grows exponentially with each step, leading to CUDA out of memory, and the sampling task eventually crashes.
Is this a bug or expected behavior? How much VRAM does dpm-solver actually need in total?
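One way to pin down where the memory goes (this helper is hypothetical, not part of the repo) is to log allocated CUDA memory before and after each solver step:

```python
import torch

def report_vram(tag: str) -> int:
    """Print and return currently allocated CUDA memory in bytes
    (returns 0 on CPU-only machines)."""
    used = torch.cuda.memory_allocated() if torch.cuda.is_available() else 0
    print(f"{tag}: {used / 2**20:.1f} MiB allocated")
    return used
```

Calling this inside the dpm-solver loop and seeing the number climb steadily per step would suggest an accumulating computation graph (the issue fixed above), rather than a genuinely large fixed working set.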