w4ffl35 / krita_stable_diffusion

A Stable Diffusion plugin for Krita
GNU General Public License v3.0

RuntimeError: CUDA out of memory #60

Closed: basillicus closed this issue 1 year ago

basillicus commented 1 year ago

Hi, I managed to install the addon, but I cannot get it working, as I always get this error: RuntimeError: CUDA out of memory

My graphics card is an Nvidia RTX A100 with 4 GB of VRAM. I tried Stable-Diffusion-webui, and it has a flag --lowvram that lets me run Stable Diffusion on my card.

Is there a possibility of adding a similar flag, so the plugin could run in Krita on video cards with 4 GB of VRAM?

I am running Krita 5.1 as an AppImage on Ubuntu 20.04

Interpause commented 1 year ago

The flag --lowvram is specific to some of the larger Stable Diffusion forks (e.g. AUTOMATIC1111) that have had far more development hours from far more people put into implementing it. It actually uses a lot of hackery to move parts of the model to and from the GPU individually.
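For anyone curious, here is a minimal sketch of the general idea behind that kind of offloading, written against plain PyTorch; the helper name and toy model below are illustrative assumptions, not AUTOMATIC1111's actual implementation:

```python
# Sketch: keep the model in system RAM and move each submodule to the GPU
# only for the duration of its forward pass, then move it back.
import torch
import torch.nn as nn

def offload_to_cpu_after_use(module: nn.Module, device: torch.device):
    """Register hooks so `module` lives on the CPU except while it runs."""
    def pre_hook(mod, args):
        mod.to(device)            # pull weights onto the GPU just in time
    def post_hook(mod, args, output):
        mod.to("cpu")             # push weights back to system RAM
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    module.register_forward_pre_hook(pre_hook)
    module.register_forward_hook(post_hook)

if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # Toy stand-in for a large model split into blocks
    toy_model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
    for block in toy_model:
        offload_to_cpu_after_use(block, device)
    x = torch.randn(1, 512).to(device)
    y = toy_model(x)              # each block is on the GPU only while it runs
    print(y.shape)
```

This trades speed for memory: every step pays the cost of copying weights over PCIe, which is why the real implementations are considerably more involved.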

If you are fine with switching things up, I made a Krita plugin that builds on top of AUTOMATIC1111's code and hence gives access to all of its features while supporting a Krita workflow. Upgrading isn't difficult either; see https://github.com/Interpause/auto-sd-krita/wiki/Quick-Switch-Using-Existing-AUTOMATIC1111-Install

If you aren't, maybe you can research xformers or how exactly --lowvram works, implement it in the code, and then make a pull request. It's the magic of open source.
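As a hedged aside, if the backend happens to be a Hugging Face diffusers pipeline (an assumption, not something this plugin is confirmed to use), several built-in memory savers already exist and get close to --lowvram behaviour:

```python
# Sketch assuming a diffusers-based pipeline; the model id is only an example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # example model id, substitute your own
    torch_dtype=torch.float16,         # half precision roughly halves VRAM use
)
pipe.enable_attention_slicing()        # compute attention in smaller chunks
pipe.enable_sequential_cpu_offload()   # keep weights in RAM, move layers to GPU as needed
# pipe.enable_xformers_memory_efficient_attention()  # optional, requires xformers

image = pipe("a lighthouse at dusk").images[0]
image.save("out.png")
```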

basillicus commented 1 year ago

Thanks for your reply. Understood! I will try the plugin you shared.

Also, as you mentioned, I tried to understand how --lowvram works in AUTOMATIC1111 so I could try to implement it here, but I still need to learn a bit more about torch and how it handles processes. It is an idea I have in mind, though.