Closed · didoll-john closed this 6 months ago
Are you using the local gradio demo? Could you try the latest code and see whether inference runs successfully?
Inference works now. Thanks!
Why am I getting this error?
File "/data2/home/dev_mf/DynamiCrafter/lvdm/modules/attention.py", line 118, in forward
    sim = sim.softmax(dim=-1)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 25.31 GiB. GPU 0 has a total capacity of 79.15 GiB of which 7.39 GiB is free. Process 2615231 has 7.30 GiB memory in use. Process 2632653 has 7.30 GiB memory in use. Process 2633258 has 7.30 GiB memory in use. Including non-PyTorch memory, this process has 49.83 GiB memory in use. Of the allocated memory 23.36 GiB is allocated by PyTorch, and 25.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.
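One thing worth trying before anything else is the workaround the error message itself suggests: setting `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` before PyTorch makes its first CUDA allocation, which can reclaim the large reserved-but-unallocated blocks caused by fragmentation. A minimal sketch (how you then launch inference is up to your setup):

```python
import os

# Must be set before the first CUDA allocation (i.e. before any tensor
# lands on the GPU). This is the setting the OOM message recommends to
# reduce fragmentation from reserved-but-unallocated memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# ...then import torch and run the DynamiCrafter inference script as usual.
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

Equivalently, export the variable in the shell before running the script.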
Hi, I'm generating with ComfyUI and get an OOM error on a 16 GB GPU. Is there a way to fix this?
Did you use the ComfyUI here? They have the pruned version of the model weights, which supports image animation at a resolution of 1024x576 using only 10GB memory.
Thanks dude, it works!
Try installing the xformers library.
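For context on why xformers helps here: the line that OOMs (`sim = sim.softmax(dim=-1)`) materializes the full (seq × seq) attention matrix, which is exactly the 25 GiB allocation in the traceback. xformers' `memory_efficient_attention` (or, on PyTorch ≥ 2.0, the built-in `scaled_dot_product_attention`) computes the same result without ever holding that matrix in memory. A rough sketch of the equivalence, with made-up toy tensor sizes:

```python
import torch
import torch.nn.functional as F

# Toy sizes; the real attention in lvdm/modules/attention.py runs on much
# longer sequences, where the (seq x seq) sim matrix dominates memory.
q = torch.randn(2, 4, 64, 32)  # (batch, heads, seq, head_dim)
k = torch.randn(2, 4, 64, 32)
v = torch.randn(2, 4, 64, 32)

# Naive path: explicitly builds the (seq x seq) attention matrix.
scale = q.shape[-1] ** -0.5
sim = (q @ k.transpose(-2, -1)) * scale
out_naive = sim.softmax(dim=-1) @ v

# Fused path: same math, without materializing the full sim matrix.
# With xformers installed, xformers.ops.memory_efficient_attention provides
# the same fused computation (note it expects a different tensor layout).
out_fused = F.scaled_dot_product_attention(q, k, v)

print(torch.allclose(out_naive, out_fused, atol=1e-5))  # True
```

Whether DynamiCrafter picks up xformers automatically depends on the version of the code you are running; if not, reducing resolution or `video_length` attacks the same (seq × seq) term directly.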
I tried the 1024 model, and no matter how far I reduce video_length, it still OOMs. I've seen other users ask about this on reddit and other platforms, but no one has given a direct answer. I hope the authors can state how much VRAM the 1024 model requires, or whether there is a way to reduce memory usage. Thanks.