AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Not enough Video MEM error when generating image after switching between SD v2 and v1 models within X/Y plot. #6331

Closed przemoc closed 1 year ago

przemoc commented 1 year ago

Is there an existing issue for this?

What happened?

RuntimeError: Not enough memory when generating an image after switching between SD v2 and v1 models within an X/Y plot. Generation works fine when the models are switched manually.

Steps to reproduce the problem

  1. Choose SD v2 model.
  2. Generate X/Y plot. X: Nothing. Y: Checkpoint name: v2-model-name, v1-model-name.

When the v1 model is listed first among the checkpoints, the X/Y plot has to be generated twice to trigger the failure.

GPUs with more than 6 GB of VRAM may be harder to push into this error.

What should have happened?

VRAM held by the previous model should be properly freed when switching between SD v2 and SD v1 models, regardless of how the change is triggered (manually, or automatically, e.g. via an X/Y plot).
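A plausible mechanism for the bug (an assumption on my part, not confirmed from the webui code) is that some reference to the unloaded model survives the automatic switch, so Python's garbage collector cannot release it and its VRAM stays allocated. A minimal pure-Python sketch of that failure mode (`FakeModel` and the registry are hypothetical stand-ins):

```python
import gc
import weakref

class FakeModel:
    """Stand-in for a loaded SD checkpoint (hypothetical)."""
    def __init__(self, name):
        self.name = name

def swap_model(registry, new_model):
    # Buggy pattern: the old model is replaced in the registry, but the
    # returned/held reference (think: a closure or cached sampler state)
    # keeps it alive, so its memory is never returned to the allocator.
    old = registry.get("current")
    registry["current"] = new_model
    return old

registry = {}
v2 = FakeModel("v2-1_512-ema-pruned")
swap_model(registry, v2)                 # nothing to leak yet
leaked = swap_model(registry, FakeModel("v1-5-pruned-emaonly"))

ref = weakref.ref(v2)
del v2
gc.collect()
assert ref() is not None  # "unloaded" v2 model is still pinned in memory

del leaked
gc.collect()
assert ref() is None      # only now can its memory actually be reclaimed
```

The manual-switch path presumably drops that last reference, which would explain why changing models by hand does not hit the error.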

Commit where the problem happens

bc43293c640aef65df3136de9e5bd8b7e79eb3e0

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Microsoft Edge

Command Line Arguments

No response

Additional information, context and logs

Environment (Windows 11 Pro 10.0.22621, WSL2, 24GB RAM allocated for WSL, RTX 2060 w/ 6GB VRAM):

PS C:\Users\przemoc> wsl --version
WSL version: 1.0.3.0
Kernel version: 5.15.79.1
WSLg version: 1.0.47
MSRDC version: 1.2.3575
Direct3D version: 1.606.4 
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.22621.963
(automatic) (miniconda3-latest) przemoc@NUC11PHKi7C002:~/python/stable-diffusion-workspace/AUTOMATIC1111/stable-diffusion-webui$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.1 LTS
Release:        22.04
Codename:       jammy
(automatic) (miniconda3-latest) przemoc@NUC11PHKi7C002:~/python/stable-diffusion-workspace/AUTOMATIC1111/stable-diffusion-webui$ free -m
               total        used        free      shared  buff/cache   available
Mem:           24037         375       16515           1        7146       23313
Swap:           6144          93        6050
(automatic) (miniconda3-latest) przemoc@NUC11PHKi7C002:~/python/stable-diffusion-workspace/AUTOMATIC1111/stable-diffusion-webui$ nvidia-smi -L
GPU 0: NVIDIA GeForce RTX 2060 (UUID: GPU-60a66d3b-32fa-97fc-185e-a7d72f30c37f)
(automatic) (miniconda3-latest) przemoc@NUC11PHKi7C002:~/python/stable-diffusion-workspace/AUTOMATIC1111/stable-diffusion-webui$ nvidia-smi
Wed Jan  4 22:43:59 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.75       Driver Version: 517.40       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0  On |                  N/A |
| N/A   43C    P8    12W /  N/A |    768MiB /  6144MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A        28      G   /Xwayland                       N/A      |
+-----------------------------------------------------------------------------+

First scenario: start with the SD v1.5 model.

(automatic) (miniconda3-latest) przemoc@NUC11PHKi7C002:~/python/stable-diffusion-workspace/AUTOMATIC1111/stable-diffusion-webui$ python launch.py --api
Python 3.10.6 (main, Oct  7 2022, 20:19:58) [GCC 11.2.0]
Commit hash: bc43293c640aef65df3136de9e5bd8b7e79eb3e0
Installing requirements for Web UI
Launching Web UI with arguments: --api
No module 'xformers'. Proceeding without it.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [81761151] from /home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(1): bad_prompt
Model loaded.
Warning: Bad ui setting value: img2img/Mask mode/value: Draw mask; Default value "Inpaint masked" will be used instead.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.

Generate image in webui. No issue.

100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:09<00:00,  2.02it/s]
Total progress: 100%|█████████████████████████████████████████████████████████████████████████████████| 20/20 [00:08<00:00,  2.39it/s]
{"prompt": "memory issue", "all_prompts": ["memory issue"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 2825692511, "all_seeds": [2825692511], "subseed": 1535075180, "all_subseeds": [1535075180], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "Euler a", "cfg_scale": 7, "steps": 20, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_hash": "81761151", "seed_resize_from_w": 0, "seed_resize_from_h": 0, "denoising_strength": null, "extra_generation_params": {}, "index_of_first_image": 0, "infotexts": ["memory issue\nSteps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2825692511, Size: 512x512, Model hash: 81761151, Model: v1-5-pruned-emaonly, ENSD: 31337"], "styles": ["None", "None"], "job_timestamp": "20230104220523", "clip_skip": 1, "is_using_inpainting_conditioning": false}
Loading config from: /home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/models/Stable-diffusion/v2-1_512-ema-pruned.yaml

Change model to SD v2.1 in webui.

LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 865.91 M params.
Loading weights [47c8ec7d] from /home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/models/Stable-diffusion/v2-1_512-ema-pruned.ckpt
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Textual inversion embeddings skipped(1): bad_prompt
Model loaded.

Generate image in webui. No issue.

100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:04<00:00,  4.94it/s]
Total progress: 100%|█████████████████████████████████████████████████████████████████████████████████| 20/20 [00:04<00:00,  4.86it/s]
{"prompt": "memory issue", "all_prompts": ["memory issue"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 336515835, "all_seeds": [336515835], "subseed": 1153235318, "all_subseeds": [1153235318], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "Euler a", "cfg_scale": 7, "steps": 20, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_hash": "47c8ec7d", "seed_resize_from_w": 0, "seed_resize_from_h": 0, "denoising_strength": null, "extra_generation_params": {}, "index_of_first_image": 0, "infotexts": ["memory issue\nSteps: 20, Sampler: Euler a, CFG scale: 7, Seed: 336515835, Size: 512x512, Model hash: 47c8ec7d, Model: v2-1_512-ema-pruned, ENSD: 31337"], "styles": ["None", "None"], "job_timestamp": "20230104220621", "clip_skip": 1, "is_using_inpainting_conditioning": false}

Change model back to SD v1.5 in webui.

LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [81761151] from /home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(1): bad_prompt
Model loaded.

Generate X/Y plot in webui. X: Nothing. Y: Checkpoint name: v1-5-pruned-emaonly,v2-1_512-ema-pruned. No issue.

X/Y plot will create 2 images on a 1x2 grid. (Total steps to process: 40)
100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:04<00:00,  4.48it/s]
Loading config from: /home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/models/Stable-diffusion/v2-1_512-ema-pruned.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 865.91 M params.
Loading weights [47c8ec7d] from /home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/models/Stable-diffusion/v2-1_512-ema-pruned.ckpt
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Textual inversion embeddings skipped(1): bad_prompt
Model loaded.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:08<00:00,  2.38it/s]
Total progress: 100%|█████████████████████████████████████████████████████████████████████████████████| 40/40 [00:45<00:00,  1.13s/it]
{"prompt": "memory issue", "all_prompts": ["memory issue"], "negative_prompt": "", "all_negative_prompts": [""], "seed": 1271361060, "all_seeds": [1271361060], "subseed": 1415747460, "all_subseeds": [1415747460], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "Euler a", "cfg_scale": 7, "steps": 20, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_hash": "81761151", "seed_resize_from_w": 0, "seed_resize_from_h": 0, "denoising_strength": null, "extra_generation_params": {}, "index_of_first_image": 0, "infotexts": ["memory issue\nSteps: 20, Sampler: Euler a, CFG scale: 7, Seed: 1271361060, Size: 512x512, Model hash: 81761151, Model: v1-5-pruned-emaonly, ENSD: 31337"], "styles": ["None", "None"], "job_timestamp": "20230104220844", "clip_skip": 1, "is_using_inpainting_conditioning": false}

Change model to SD v2.1 in webui. Generate X/Y plot in webui. X: Nothing. Y: Checkpoint name: v2-1_512-ema-pruned,v1-5-pruned-emaonly. Fails when generating image using SD v2.1 model after switching from SD v1.5 model.

X/Y plot will create 2 images on a 1x2 grid. (Total steps to process: 40)
100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:04<00:00,  4.01it/s]
LatentDiffusion: Running in eps-prediction mode██████████████▌                                        | 20/40 [00:05<00:04,  4.05it/s]
DiffusionWrapper has 859.52 M params.
Loading weights [81761151] from /home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(1): bad_prompt
Model loaded.
  0%|                                                                                                          | 0/20 [00:00<?, ?it/s]
Error completing request
Arguments: ('memory issue', '', 'None', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 4, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, False, False, False, False, '', '', 0, '', 9, 'v2-1_512-ema-pruned,v1-5-pruned-emaonly', True, False, False) {}
Traceback (most recent call last):
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/txt2img.py", line 49, in txt2img
    processed = modules.scripts.scripts_txt2img.run(p, *args)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/scripts.py", line 328, in run
    processed = script.run(p, *script_args)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/scripts/xy_grid.py", line 436, in run
    processed = draw_xy_grid(
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/scripts/xy_grid.py", line 230, in draw_xy_grid
    processed:Processed = cell(x, y)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/scripts/xy_grid.py", line 414, in cell
    res = process_images(pc)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/processing.py", line 473, in process_images
    res = process_images_inner(p)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/processing.py", line 580, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/processing.py", line 739, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/sd_samplers.py", line 530, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/sd_samplers.py", line 440, in launch_sampling
    return func()
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/sd_samplers.py", line 530, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/sd_samplers.py", line 338, in forward
    x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
    h = module(h, emb, context)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 324, in forward
    x = block(x, context=context[i])
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/sd_hijack_checkpoint.py", line 4, in BasicTransformerBlock_forward
    return checkpoint(self._forward, x, context)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 235, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 96, in forward
    outputs = run_function(*args)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 262, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 99, in split_cross_attention_forward
    raise RuntimeError(f'Not enough memory, use lower resolution (max approx. {max_res}x{max_res}). '
RuntimeError: Not enough memory, use lower resolution (max approx. 384x384). Need: 0.0GB free, Have:0.0GB free
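For context on the error text: `split_cross_attention_forward` splits the attention computation into progressively more chunks until one chunk fits into free VRAM, and raises once it exceeds a split limit. When free memory is down to a few MiB, both figures round to 0.0GB with one decimal, which matches the log above. An illustrative sketch of that kind of check (not the webui's exact code; the function name and split limit are assumptions):

```python
def split_attention_plan(mem_free_bytes, tensor_bytes, max_splits=64):
    """Illustrative sketch of a split-attention memory check: double the
    number of chunks until the per-chunk working set fits into free VRAM,
    and give up once the split count exceeds a limit."""
    steps = 1
    while tensor_bytes / steps > mem_free_bytes and steps <= max_splits:
        steps *= 2
    if steps > max_splits:
        # With only a few MiB free, both values format as "0.0GB",
        # just like the message in the traceback.
        raise RuntimeError(
            f"Not enough memory. Need: {tensor_bytes / 2**30:.1f}GB free, "
            f"Have: {mem_free_bytes / 2**30:.1f}GB free")
    return steps

# 8 GiB working set with 2 GiB free: split into 4 chunks of 2 GiB each.
print(split_attention_plan(2 * 2**30, 8 * 2**30))  # → 4
```

With the leaked model still occupying VRAM, almost nothing is free, so even the maximum split cannot fit and the error fires at the very first sampling step.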

Second scenario: start with the SD v2.1 model.

(automatic) (miniconda3-latest) przemoc@NUC11PHKi7C002:~/python/stable-diffusion-workspace/AUTOMATIC1111/stable-diffusion-webui$ python launch.py --api
Python 3.10.6 (main, Oct  7 2022, 20:19:58) [GCC 11.2.0]
Commit hash: bc43293c640aef65df3136de9e5bd8b7e79eb3e0
Installing requirements for Web UI
Launching Web UI with arguments: --api
No module 'xformers'. Proceeding without it.
Loading config from: /home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/models/Stable-diffusion/v2-1_512-ema-pruned.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 865.91 M params.
Loading weights [47c8ec7d] from /home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/models/Stable-diffusion/v2-1_512-ema-pruned.ckpt
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Textual inversion embeddings skipped(1): bad_prompt
Model loaded.
Warning: Bad ui setting value: img2img/Mask mode/value: Draw mask; Default value "Inpaint masked" will be used instead.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.

Generate X/Y plot in webui. X: Nothing. Y: Checkpoint name: v2-1_512-ema-pruned,v1-5-pruned-emaonly. Fails when generating image using SD v2.1 model after switching from SD v1.5 model.

X/Y plot will create 2 images on a 1x2 grid. (Total steps to process: 40)
100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:12<00:00,  1.65it/s]
LatentDiffusion: Running in eps-prediction mode██████████████▌                                        | 20/40 [00:14<00:07,  2.85it/s]
DiffusionWrapper has 859.52 M params.
Loading weights [81761151] from /home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(1): bad_prompt
Model loaded.
  0%|                                                                                                          | 0/20 [00:00<?, ?it/s]
Error completing request
Arguments: ('memory issue', '', 'None', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 4, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, False, False, False, False, '', '', 0, '', 9, 'v2-1_512-ema-pruned,v1-5-pruned-emaonly', True, False, False) {}
Traceback (most recent call last):
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/txt2img.py", line 49, in txt2img
    processed = modules.scripts.scripts_txt2img.run(p, *args)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/scripts.py", line 328, in run
    processed = script.run(p, *script_args)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/scripts/xy_grid.py", line 436, in run
    processed = draw_xy_grid(
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/scripts/xy_grid.py", line 230, in draw_xy_grid
    processed:Processed = cell(x, y)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/scripts/xy_grid.py", line 414, in cell
    res = process_images(pc)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/processing.py", line 473, in process_images
    res = process_images_inner(p)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/processing.py", line 580, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/processing.py", line 739, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/sd_samplers.py", line 530, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/sd_samplers.py", line 440, in launch_sampling
    return func()
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/sd_samplers.py", line 530, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/sd_samplers.py", line 338, in forward
    x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
    h = module(h, emb, context)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 324, in forward
    x = block(x, context=context[i])
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/sd_hijack_checkpoint.py", line 4, in BasicTransformerBlock_forward
    return checkpoint(self._forward, x, context)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 235, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 96, in forward
    outputs = run_function(*args)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 262, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "/home/przemoc/.pyenv/versions/miniconda3-latest/envs/automatic/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/przemoc/git/github.com/AUTOMATIC1111/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 99, in split_cross_attention_forward
    raise RuntimeError(f'Not enough memory, use lower resolution (max approx. {max_res}x{max_res}). '
RuntimeError: Not enough memory, use lower resolution (max approx. 384x384). Need: 0.0GB free, Have:0.0GB free

I only started playing with SD v2 this year, which is when I noticed this issue. I also observed it with an older commit such as 11d432d92d63.

Originally I had 16GB RAM allocated for WSL (the host has 32GB RAM), but I increased it to 24GB to confirm the issue is related to VRAM and not system RAM.
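For what it's worth, the teardown a checkpoint swap is expected to perform looks roughly like the sketch below: drop the Python-side references first, then ask the CUDA caching allocator to return the freed blocks to the driver. This is a hypothetical sketch under those assumptions, not the webui's actual unload path (`unload_model` and `state` are invented names):

```python
import gc

def unload_model(state):
    """Hypothetical teardown before loading the next checkpoint."""
    model = state.pop("model", None)
    if model is None:
        return
    del model
    gc.collect()  # drop lingering Python references first
    try:
        import torch
        if torch.cuda.is_available():
            # then hand cached, now-unreferenced VRAM back to the driver
            torch.cuda.empty_cache()
    except ImportError:
        pass  # sketch stays runnable without torch installed

state = {"model": object()}
unload_model(state)
assert "model" not in state
```

If any step is skipped, or if a reference to the old model survives (as the X/Y plot path seems to allow), `empty_cache()` cannot reclaim the blocks and the next model's allocations fail exactly as in the tracebacks above.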

catboxanon commented 1 year ago

Closing as stale.