Just as an additional note, I can generate images without a split prompt provided I reduce the batch size significantly (e.g. to 4).
Just tested adding --opt-split-attention-v1 as suggested in #8409 and it does indeed "fix the issue", allowing for the expected batch size (12) with an unsplit prompt. Hope this helps isolate the cause.
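For anyone else who wants to try it, this is roughly how I added the flag, assuming the default Linux launcher (webui-user.sh); on Windows the equivalent would be the set COMMANDLINE_ARGS line in webui-user.bat:

```sh
# webui-user.sh (sketch; merge the flag with any arguments you already set here)
export COMMANDLINE_ARGS="--opt-split-attention-v1"
```

It can also be passed directly on the command line, e.g. ./webui.sh --opt-split-attention-v1.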
Yeah, use v1, which should be the default; sadly it isn't.
Closing as stale.
Is there an existing issue for this?
What happened?
When using an unsplit prompt (<76 tokens) and a Lora, a huge amount of VRAM (20-30 GB) is allocated, causing an out-of-memory error (on my 12 GB GPU).
Forcing the prompt to split allows generations to work without issue (two or more BREAKs are required); see the illustrative example below.
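For illustration only (the prompt text and Lora name below are placeholders, not my actual prompt), the difference between an unsplit prompt and one forced to split with BREAK looks roughly like this:

```
Unsplit (<76 tokens), which triggers the huge VRAM allocation with a Lora:
a photo of a castle on a hill, dramatic lighting, detailed sky <lora:exampleLora:0.8>

Split with two BREAKs, which works at the expected batch size:
a photo of a castle on a hill <lora:exampleLora:0.8>
BREAK
dramatic lighting
BREAK
detailed sky
```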
Steps to reproduce the problem
What should have happened?
Commit where the problem happens
515bd85a015d2269d9e3c45ce88a0f4f7e965807
What platforms do you use to access the UI?
Linux
What browsers do you use to access the UI?
Google Chrome
Command Line Arguments
List of extensions
a1111-sd-webui-tagcomplete, clip-interrogator-ext, custom-diffusion-webui, depthmap2mask, embedding-inspector, model-keyword, multi-subject-render, openpose-editor, sd-dynamic-prompts, sd-infinity-grid-generator-script, SD-latent-mirroring, sd_smartprocess, sdweb-merge-board, sd-webui-ar, sd-webui-controlnet, seed_travel, shift-attention, stable-diffusion-webui, stable-diffusion-webui-dataset-tag-editor, stable-diffusion-webui-depthmap-script, stable-diffusion-webui-images-browser, stable-diffusion-webui-inspiration, stable-diffusion-webui-prompt-travel, stable-diffusion-webui-sonar, stable-diffusion-webui-two-shot, stable-diffusion-webui-wd14-tagger, test_my_prompt, training-picker, ultimate-upscale-for-automatic1111, video_loopback_for_webui
Console logs
Additional information
This happens with other Loras, not just the one given in the example.