mykeehu opened 1 year ago
It's getting more interesting, because Gradio crashes after generating just a few images, so it's increasingly likely that this isn't only a Colab problem but a Gradio connection issue. For some reason it seems to over-buffer and then drop the connection to Colab. Is there some RAM limit on the Gradio side?
Merging models is also affected. If one of the models is too large, the webui crashes silently and RAM usage instantly drops back to about 800 MB. Isn't --lowram supposed to prevent this scenario by loading into VRAM instead? It doesn't seem to work.
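In principle, a merge could be done key by key so that only one combined tensor is held at a time, instead of materializing a third full model in RAM on top of the two inputs. A minimal sketch with plain Python dicts standing in for state dicts (the function name and the simple weighted-average merge are illustrative assumptions, not the webui's actual merge code):

```python
def merge_state_dicts_inplace(a, b, alpha=0.5):
    """Weighted merge of b into a, key by key.

    Mutates `a` in place, so peak memory stays near two models
    plus one tensor, rather than three full models.
    """
    for key in a:
        if key in b:
            a[key] = [(1 - alpha) * x + alpha * y
                      for x, y in zip(a[key], b[key])]
    return a

# Toy "state dicts": each list stands in for a weight tensor.
model_a = {"layer.weight": [1.0, 2.0], "layer.bias": [0.0]}
model_b = {"layer.weight": [3.0, 4.0], "layer.bias": [2.0]}
merged = merge_state_dicts_inplace(model_a, model_b, alpha=0.5)
# merged["layer.weight"] == [2.0, 3.0], merged["layer.bias"] == [1.0]
```

Whether the real merge code can be restructured this way depends on how it streams the checkpoints, but the principle is the same: avoid holding all three full models at once.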
Today I can't switch models on Colab because it hits the RAM cap. I wanted to switch from realisticVisionV13 to the base SD 1.5 (pruned-emaonly) and couldn't. Memory optimization is needed when switching models.
I restarted Colab and set the default model to load at startup with the --ckpt command-line argument. That's all it uses by default:
After "DiffusionWrapper has 859.52 M params." is printed, memory usage almost doubles, as if the model were loaded twice.
Is there an existing issue for this?
What happened?
I started the system on Colab, and when I try to switch to another model (e.g. Deliberate), memory overflows: usage then drops back below 1 GB and I get a CTRL+C message in the console. Something about the memory usage needs fixing, and there is nothing in the settings to control RAM caching. Maybe the hash calculation is causing the memory spike and should be optimized; I don't know.
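If the model hash were computed by reading the entire multi-GB checkpoint into memory at once, a RAM spike on every model switch would be expected; hashing in fixed-size chunks keeps memory use flat regardless of file size. A minimal stdlib sketch (the function name and chunk size are illustrative, not the webui's actual hashing code):

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so RAM use stays constant,
    instead of reading the whole file into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The result is identical to hashing the whole file in one read, so a change like this would not invalidate previously computed hashes.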
Steps to reproduce the problem
What should have happened?
Memory usage should remain below 10-12 GB of RAM even when switching models.
Commit where the problem happens
3715ece0adce7bf7c5e9c5ab3710b2fdc3848f39
What platforms do you use to access the UI ?
Other/Cloud
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
List of extensions
None
Console logs
Additional information