theoldsong opened 6 months ago
I have seen someone implement this using the CPU offload method. How did you do it? Thank you.
Reference: https://www.bilibili.com/video/BV1by411h7Vg/?vd_source=a25b27836b1cea089a7471b5f6f899cd at 42s.
Edit gradio_demo/app.py and add the following after line 130 (line 130 is pipe.unet_encoder.to(device)):
pipe.enable_sequential_cpu_offload()
It only needs 8 GB of VRAM.
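For context, the suggested edit can be sketched as below. This is a sketch, not the exact file: the surrounding lines in gradio_demo/app.py may differ, and `pipe` and `device` are assumed to come from the existing setup code. `enable_sequential_cpu_offload()` is the diffusers API that keeps submodules on the CPU and moves them to the GPU only while they execute, which cuts peak VRAM at the cost of speed.

```python
# gradio_demo/app.py (sketch; surrounding code unchanged)
pipe.unet_encoder.to(device)          # existing line 130

# Added: offload submodules to CPU and stream them to the GPU
# layer by layer during inference, trading speed for ~8 GB peak VRAM.
pipe.enable_sequential_cpu_offload()
```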
Can you post the code here? That site only allows high resolution via registration...
Reference: https://www.bilibili.com/video/BV1by411h7Vg/?vd_source=a25b27836b1cea089a7471b5f6f899cd at 42s.
Edit gradio_demo/app.py and add the following after line 130 (line 130 is pipe.unet_encoder.to(device)):
pipe.enable_sequential_cpu_offload()
It only needs 8 GB of VRAM.
Thanks for sharing. However, it works only once; after the first generation it fails with the following error:
NotImplementedError: Cannot copy out of meta tensor; no data!
Is there any solution to this? I searched, and this error comes up with RAM offloading; one such instance: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13087
@nitinmukesh You can try removing pipe.to(device) and pipe.unet_encoder.to(device), then move pipe.enable_sequential_cpu_offload() to line 125.
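Putting that suggestion together, a sketch of the revised setup (the exact line numbers and surrounding code in gradio_demo/app.py may differ):

```python
# Sketch: do NOT call pipe.to(device) or pipe.unet_encoder.to(device);
# enable_sequential_cpu_offload() manages device placement itself, and
# moving modules to the GPU manually can conflict with its offload hooks,
# leaving "meta" tensors behind and triggering
# "NotImplementedError: Cannot copy out of meta tensor; no data!".
pipe.enable_sequential_cpu_offload()  # around line 125, instead of the .to(device) calls
```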
Thank you @yoyoNTNU .
I am trying now.
@yoyoNTNU
Doesn't work. The code just hangs and the Gradio UI never starts. :(
@nitinmukesh Try this code (app.py): https://drive.google.com/file/d/1XVXXBbrEoh18Fq9F3-tfGxdAZxDWX2Em/view?usp=drive_link
I will try and post results. Thank you.
I have seen that many people have run this project with under 8 GB of VRAM, and it is still fast. The author does not provide a similar implementation. How can I do it?