AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Cant even generate my first image #15097

Closed fdefake closed 9 months ago

fdefake commented 9 months ago

What happened?

I just put "puppy" in the prompt and "cat" in the negative prompt, and an error occurred.

Steps to reproduce the problem

  1. Use Python 3.10.6 (I used conda) to run webui-user.bat with the argument --lowvram
  2. Enter any prompt
  3. Generate an image
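
For reference, the --lowvram argument goes through the COMMANDLINE_ARGS line of webui-user.bat. This is roughly what mine looks like (a minimal sketch based on the default template; everything other than the --lowvram argument is the stock file, not copied verbatim from my setup):

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem --lowvram makes the webui shuffle model parts between VRAM and system RAM, for GPUs with very little memory
set COMMANDLINE_ARGS=--lowvram

call webui.bat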

What should have happened?

I think it should have generated an image.

What browsers do you use to access the UI?

No response

Sysinfo

My laptop is new; it has 16 GB of RAM and 2 GB of VRAM.

Console logs

(sdwebui) C:\Users\bto51\Desktop\stable-diffusion-webui>webui-user.bat
venv "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Version: v1.8.0
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
Launching Web UI with arguments: --lowvram
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [6ce0161689] from C:\Users\bto51\Desktop\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Creating model from config: C:\Users\bto51\Desktop\stable-diffusion-webui\configs\v1-inference.yaml
Startup time: 16.0s (prepare environment: 3.5s, import torch: 5.3s, import gradio: 1.3s, setup paths: 1.2s, initialize shared: 2.0s, other imports: 0.7s, load scripts: 1.1s, create ui: 0.6s, gradio launch: 0.5s).
Applying attention optimization: Doggettx... done.
Model loaded in 7.0s (load weights from disk: 0.8s, create model: 0.5s, apply weights to model: 3.7s, apply half(): 0.7s, calculate empty prompt: 1.2s).
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]Exception in thread MemMon:
  0%|                                                                                           | 0/20 [00:09<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\bto51\anaconda3\envs\sdwebui\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\memmon.py", line 53, in run
    free, total = self.cuda_mem_get_info()
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\memmon.py", line 34, in cuda_mem_get_info
    return torch.cuda.mem_get_info(index)
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 663, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

*** Error completing request
*** Arguments: ('task(1fah0egwiltc92p)', <gradio.routes.Request object at 0x0000017F2E1636D0>, 'Puppy', 'Cat', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\txt2img.py", line 110, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\processing.py", line 785, in process_images
        res = process_images_inner(p)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\processing.py", line 921, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\processing.py", line 1257, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 32, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
        h = module(h, emb, context)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1557, in _call_impl
        args_result = hook(self, args)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\lowvram.py", line 52, in send_me_to_gpu
        module_in_gpu.to(cpu)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1160, in to
        return self._apply(convert)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
        module._apply(fn)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
        module._apply(fn)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
        module._apply(fn)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
        param_applied = fn(param)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
        return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
    RuntimeError: CUDA error: the launch timed out and was terminated
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
    For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
    Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

---
Traceback (most recent call last):
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\call_queue.py", line 77, in f
    devices.torch_gc()
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\devices.py", line 81, in torch_gc
    torch.cuda.empty_cache()
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 159, in empty_cache
    torch._C._cuda_emptyCache()
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Additional information

My laptop is new. I'm too tired to write a clearer issue, but I hope there's a merciful soul who can help me rather than a bunch of 40-year-olds dumping on me for filing an issue in a weird format. Anyway, I need this tomorrow for schoolwork. Good night, please help!

missionfloyd commented 9 months ago

> I used conda

Install Python 3.10 from here (remember to check "Add to PATH"), delete the venv folder, and run it without conda.

Multiple Python versions can be installed at the same time (if you need them for other things). You can set the one to use in webui-user.bat:

set PYTHON=python3.10
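
If python3.10 isn't on PATH, pointing PYTHON at the full path of the interpreter also works. A sketch of webui-user.bat along those lines (the install path is only an example; adjust it to wherever the python.org installer put python.exe):

@echo off

rem example path for a python.org 3.10 install - adjust to your machine
set PYTHON=C:\Users\bto51\AppData\Local\Programs\Python\Python310\python.exe
set COMMANDLINE_ARGS=--lowvram

call webui.bat

Either way, delete the old venv folder first so it gets rebuilt against the new interpreter.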

fdefake commented 9 months ago

I'm checking that out, but thanks in advance.

fdefake commented 9 months ago

Unfortunately, that did not work. I installed Python 3.10.11 and ran webui-user.bat after that. It started and launched as normal, but when I tried to generate an image it gave me the same error as always. This is the console log:

Creating venv in directory C:\Users\bto51\Desktop\stable-diffusion-webui\venv using python "C:\Users\bto51\AppData\Local\Programs\Python\Python310\python.exe"
venv "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.8.0
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting torch==2.1.2
  Using cached https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp310-cp310-win_amd64.whl (2473.9 MB)
Collecting torchvision==0.16.2
  Using cached https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp310-cp310-win_amd64.whl (5.6 MB)
Collecting networkx
  Using cached https://download.pytorch.org/whl/networkx-3.2.1-py3-none-any.whl (1.6 MB)
Collecting sympy
  Using cached https://download.pytorch.org/whl/sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting fsspec
  Using cached fsspec-2024.2.0-py3-none-any.whl (170 kB)
Collecting typing-extensions
  Using cached typing_extensions-4.10.0-py3-none-any.whl (33 kB)
Collecting filelock
  Using cached filelock-3.13.1-py3-none-any.whl (11 kB)
Collecting jinja2
  Using cached Jinja2-3.1.3-py3-none-any.whl (133 kB)
Collecting requests
  Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting pillow!=8.3.*,>=5.3.0
  Using cached https://download.pytorch.org/whl/pillow-10.2.0-cp310-cp310-win_amd64.whl (2.6 MB)
Collecting numpy
  Using cached numpy-1.26.4-cp310-cp310-win_amd64.whl (15.8 MB)
Collecting MarkupSafe>=2.0
  Using cached MarkupSafe-2.1.5-cp310-cp310-win_amd64.whl (17 kB)
Collecting charset-normalizer<4,>=2
  Using cached charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl (100 kB)
Collecting idna<4,>=2.5
  Using cached idna-3.6-py3-none-any.whl (61 kB)
Collecting urllib3<3,>=1.21.1
  Using cached urllib3-2.2.1-py3-none-any.whl (121 kB)
Collecting certifi>=2017.4.17
  Using cached certifi-2024.2.2-py3-none-any.whl (163 kB)
Collecting mpmath>=0.19
  Using cached https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision
Successfully installed MarkupSafe-2.1.5 certifi-2024.2.2 charset-normalizer-3.3.2 filelock-3.13.1 fsspec-2024.2.0 idna-3.6 jinja2-3.1.3 mpmath-1.3.0 networkx-3.2.1 numpy-1.26.4 pillow-10.2.0 requests-2.31.0 sympy-1.12 torch-2.1.2+cu121 torchvision-0.16.2+cu121 typing-extensions-4.10.0 urllib3-2.2.1
WARNING: There was an error checking the latest version of pip.
Installing clip
Installing open_clip
Installing requirements
Launching Web UI with arguments: --lowvram
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [6ce0161689] from C:\Users\bto51\Desktop\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\Users\bto51\Desktop\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 278.7s (prepare environment: 260.3s, import torch: 6.8s, import gradio: 2.1s, setup paths: 3.2s, initialize shared: 2.1s, other imports: 1.6s, load scripts: 1.2s, create ui: 0.7s, gradio launch: 0.4s).
Applying attention optimization: Doggettx... done.
Model loaded in 11.7s (load weights from disk: 0.6s, create model: 0.7s, apply weights to model: 7.6s, apply half(): 0.6s, calculate empty prompt: 2.1s).
  0%| | 0/20 [00:00<?, ?it/s]Exception in thread MemMon:
  0%| | 0/20 [00:12<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\bto51\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\memmon.py", line 53, in run
    free, total = self.cuda_mem_get_info()
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\memmon.py", line 34, in cuda_mem_get_info
    return torch.cuda.mem_get_info(index)
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 663, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

*** Error completing request
*** Arguments: ('task(psigc8r2s36g1aa)', <gradio.routes.Request object at 0x00000236AEC53220>, 'Puppy', 'Cat', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\txt2img.py", line 110, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\processing.py", line 785, in process_images
        res = process_images_inner(p)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\processing.py", line 921, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\processing.py", line 1257, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 32, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
        h = module(h, emb, context)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1557, in _call_impl
        args_result = hook(self, args)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\lowvram.py", line 52, in send_me_to_gpu
        module_in_gpu.to(cpu)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1160, in to
        return self._apply(convert)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
        module._apply(fn)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
        module._apply(fn)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
        module._apply(fn)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
        param_applied = fn(param)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
        return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
    RuntimeError: CUDA error: the launch timed out and was terminated
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
    For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
    Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


Traceback (most recent call last):
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\call_queue.py", line 77, in f
    devices.torch_gc()
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\devices.py", line 81, in torch_gc
    torch.cuda.empty_cache()
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 159, in empty_cache
    torch._C._cuda_emptyCache()
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

fdefake commented 9 months ago

Please help

fdefake commented 9 months ago

Oh, by the way, in case it's important: I've got an Intel CPU and an NVIDIA GPU.
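
To double-check that torch actually sees the NVIDIA GPU (and that the CUDA runtime responds at all), I can run this from the stable-diffusion-webui folder (a sketch, assuming the default venv location; torch.cuda.mem_get_info is the same call that fails in the MemMon thread above):

rem prints whether CUDA is available, which device torch uses, and free/total VRAM in bytes
venv\Scripts\python.exe -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0)); print(torch.cuda.mem_get_info(0))"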

fdefake commented 9 months ago

I tried again, using Python 3.10.6, and the same error as always happened:

Creating venv in directory C:\Users\bto51\Desktop\stable-diffusion-webui\venv using python "C:\Users\bto51\AppData\Local\Programs\Python\Python310\python.exe"
venv "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.8.0
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting torch==2.1.2
  Using cached https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp310-cp310-win_amd64.whl (2473.9 MB)
Collecting torchvision==0.16.2
  Using cached https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp310-cp310-win_amd64.whl (5.6 MB)
Collecting fsspec
  Using cached fsspec-2024.2.0-py3-none-any.whl (170 kB)
Collecting jinja2
  Using cached Jinja2-3.1.3-py3-none-any.whl (133 kB)
Collecting networkx
  Using cached https://download.pytorch.org/whl/networkx-3.2.1-py3-none-any.whl (1.6 MB)
Collecting typing-extensions
  Using cached typing_extensions-4.10.0-py3-none-any.whl (33 kB)
Collecting filelock
  Using cached filelock-3.13.1-py3-none-any.whl (11 kB)
Collecting sympy
  Using cached https://download.pytorch.org/whl/sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting requests
  Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting numpy
  Using cached numpy-1.26.4-cp310-cp310-win_amd64.whl (15.8 MB)
Collecting pillow!=8.3.*,>=5.3.0
  Using cached https://download.pytorch.org/whl/pillow-10.2.0-cp310-cp310-win_amd64.whl (2.6 MB)
Collecting MarkupSafe>=2.0
  Using cached MarkupSafe-2.1.5-cp310-cp310-win_amd64.whl (17 kB)
Collecting certifi>=2017.4.17
  Using cached certifi-2024.2.2-py3-none-any.whl (163 kB)
Collecting charset-normalizer<4,>=2
  Using cached charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl (100 kB)
Collecting urllib3<3,>=1.21.1
  Using cached urllib3-2.2.1-py3-none-any.whl (121 kB)
Collecting idna<4,>=2.5
  Using cached idna-3.6-py3-none-any.whl (61 kB)
Collecting mpmath>=0.19
  Using cached https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision
Successfully installed MarkupSafe-2.1.5 certifi-2024.2.2 charset-normalizer-3.3.2 filelock-3.13.1 fsspec-2024.2.0 idna-3.6 jinja2-3.1.3 mpmath-1.3.0 networkx-3.2.1 numpy-1.26.4 pillow-10.2.0 requests-2.31.0 sympy-1.12 torch-2.1.2+cu121 torchvision-0.16.2+cu121 typing-extensions-4.10.0 urllib3-2.2.1
WARNING: There was an error checking the latest version of pip.
Installing clip
Installing open_clip
Installing xformers
Installing requirements
Launching Web UI with arguments: --xformers --lowvram
Loading weights [6ce0161689] from C:\Users\bto51\Desktop\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\Users\bto51\Desktop\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 300.6s (prepare environment: 283.4s, import torch: 6.5s, import gradio: 2.1s, setup paths: 2.6s, initialize shared: 2.2s, other imports: 1.5s, load scripts: 1.1s, create ui: 0.7s, gradio launch: 0.4s).
Applying attention optimization: Doggettx... done.
Model loaded in 11.4s (load weights from disk: 0.6s, create model: 0.7s, apply weights to model: 6.8s, apply half(): 0.6s, calculate empty prompt: 2.6s).
  0%| | 0/20 [00:00<?, ?it/s]Exception in thread MemMon:
Traceback (most recent call last):
  File "C:\Users\bto51\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\memmon.py", line 53, in run
    free, total = self.cuda_mem_get_info()
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\memmon.py", line 34, in cuda_mem_get_info
    return torch.cuda.mem_get_info(index)
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 663, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

  0%| | 0/20 [00:14<?, ?it/s]
*** Error completing request
*** Arguments: ('task(8n4c7gkkuyvb64j)', <gradio.routes.Request object at 0x00000134313567D0>, 'Cat', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\txt2img.py", line 110, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\processing.py", line 785, in process_images
        res = process_images_inner(p)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\processing.py", line 921, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\processing.py", line 1257, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 32, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
        h = module(h, emb, context)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1557, in _call_impl
        args_result = hook(self, args)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\lowvram.py", line 52, in send_me_to_gpu
        module_in_gpu.to(cpu)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1160, in to
        return self._apply(convert)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
        module._apply(fn)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
        module._apply(fn)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
        module._apply(fn)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
        param_applied = fn(param)
      File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
        return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
    RuntimeError: CUDA error: the launch timed out and was terminated
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
    For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
    Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


Traceback (most recent call last):
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\call_queue.py", line 77, in f
    devices.torch_gc()
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\modules\devices.py", line 81, in torch_gc
    torch.cuda.empty_cache()
  File "C:\Users\bto51\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 159, in empty_cache
    torch._C._cuda_emptyCache()
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

missionfloyd commented 9 months ago

You could try running it on the CPU instead, but it'll be slow. Of course, it'd be pretty slow with 2GB VRAM anyway.
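
For CPU-only mode the launch flags usually look something like this in webui-user.bat (a sketch, not the only combination that works):

rem render on the CPU: skip the CUDA check and keep the weights in full precision
set COMMANDLINE_ARGS=--use-cpu all --skip-torch-cuda-test --no-half --precision full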

fdefake commented 9 months ago

It's working! I hope I can fix the GPU error sometime soon, though, because running on the CPU is REALLY slow. Anyway, thanks man, and if you find a solution, please send it to me. I'll keep the issue open in case someone finds an answer.

fdefake commented 9 months ago

Damn, I was so close... I used the arguments "--lowvram --precision full --no-half --skip-torch-cuda-test" and it launched correctly (using my 2 GB of VRAM as usual). I entered the prompt, hit generate, and it was creating an image, but just as it was about to finish it gave me the same error as always.
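
Since the error text suggests passing CUDA_LAUNCH_BLOCKING=1 for debugging, I might try setting it before launch to get a more precise stack trace. A sketch of how that could look in webui-user.bat (my assumption; the variable just needs to be in the environment when webui.bat runs):

rem make CUDA kernel launches synchronous so the stack trace points at the real failing call
set CUDA_LAUNCH_BLOCKING=1
set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

call webui.bat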

fdefake commented 9 months ago

Really weird... I launched with the same arguments, entered the prompt and... it worked. I'll close the issue, but if I run into another error, I guess I'll create a new one. Thanks @missionfloyd

fdefake commented 9 months ago

Goodbye.