leejet / stable-diffusion.cpp

Stable Diffusion in pure C/C++
MIT License

Not enough space in the context's memory pool with ControlNet #178

Closed daniandtheweb closed 4 months ago

daniandtheweb commented 4 months ago

Whenever I try to use ControlNet with an input image bigger than 512x512 I keep getting ggml_new_object: not enough space in the context's memory pool (needed 18122976, available 16777216). I'm currently using a HIPBlas build and have plenty of vram available. Is this expected or is there a way to manually increase the context memory pool in the code?

fszontagh commented 4 months ago

Check out my commit, I made some modifications to avoid this: https://github.com/leejet/stable-diffusion.cpp/pull/170/commits/6ee1c65bfdf112d7183cc3a9a967deffd36e9df2

The interesting part of this is params.mem_size.

FYI: I didn't calculate anything.

daniandtheweb commented 4 months ago

I checked out that PR and it solves every issue I was having with the context's memory pool, amazing job. I'll be closing this issue then.

fszontagh commented 4 months ago

Thanks, but as I wrote, I didn't calculate anything, so take these modifications "as-is". I tested it many times with my desktop app (only with CUDA, on 12 GB of VRAM) and it works fine, but I'm skeptical of it. So please use with caution. I think we need some automatic pre-math magic to calculate these sizes from the model files, if that's possible, and skip the hardcoded mem_size parameters.

daniandtheweb commented 4 months ago

I still haven't looked carefully at the code, so I'm not sure how it works, but maybe some check at compile time could tune that parameter specifically for the GPU memory (even if that would be quite a bad choice for distributing the program).

fszontagh commented 4 months ago

I think it depends on the size of the loaded model files (e.g. the LoRA model file size, the ControlNet model file size, etc.), which need to fit in there. I tested some LoRAs earlier that I had used with ComfyUI and the like, and tweaked the parameters until my LoRA models fit and didn't fail at runtime. So if we find, e.g., a LoRA model file that is large enough, maybe that will fail too.