NightMachinery opened this issue 1 year ago (Open)
The notes on this page for Stable Diffusion 2 also recommend enabling this for low-memory setups. The change from 512x512 to 768x768 means more people will be hitting memory limits. Could this perhaps be added to the --medvram and --lowvram arguments?
"If you have low GPU RAM available, make sure to add a pipe.enable_attention_slicing() after sending it to cuda for less VRAM usage (to the cost of speed)"
+1
Anyone working on this?
Is there an existing issue for this?
What would your feature do?
HuggingFace recommends using attention slicing on Apple Silicon (M1, M2). Is this supported in AUTOMATIC1111? Can it be added?
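To make the request concrete, here is an illustrative sketch of the idea behind attention slicing (not the diffusers or webui implementation): the attention output is computed in query chunks, so the full seq_len x seq_len attention matrix is never materialized at once.

```python
import torch

def sliced_attention(q, k, v, slice_size=1024):
    # q, k, v: (batch, seq_len, dim); slice_size controls the memory/speed trade-off.
    out = torch.empty_like(q)
    scale = q.shape[-1] ** -0.5
    for start in range(0, q.shape[1], slice_size):
        end = start + slice_size
        # Only a (slice_size x seq_len) attention block is held in memory per step.
        attn = torch.softmax((q[:, start:end] @ k.transpose(1, 2)) * scale, dim=-1)
        out[:, start:end] = attn @ v
    return out
```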
Proposed workflow
_
Additional information
No response