AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: I am unable to use img2img with new GPU #14244

Open mcDandy opened 10 months ago

mcDandy commented 10 months ago

Is there an existing issue for this?

What happened?

I switched PCs to one that is more capable. However, img2img stopped working. I tried enabling upcasting to float32 in the options, but that did not help.

Steps to reproduce the problem

  1. Go to img2img
  2. Add an image (any size)
  3. Add a prompt
  4. Click Generate

What should have happened?

Generation based on the image should begin.

Sysinfo

sysinfo-2023-12-07-20-27.txt

What browsers do you use to access the UI?

Mozilla Firefox, Google Chrome

Console logs

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0-2-g4afaaf8a
Commit hash: 4afaaf8a020c1df457bcf7250cb1c7f609699fa7
Launching Web UI with arguments: --medvram --xformers
Loading weights [31e35c80fc] from F:\sd\webui\models\Stable-diffusion\sd_xl_base_1.0.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 398.2s (prepare environment: 208.8s, import torch: 73.1s, import gradio: 25.3s, setup paths: 38.3s, import ldm: 0.3s, initialize shared: 2.9s, other imports: 29.0s, setup codeformer: 3.6s, setup gfpgan: 1.0s, list SD models: 0.7s, load scripts: 2.3s, load upscalers: 0.3s, reload hypernetworks: 0.2s, initialize extra networks: 0.6s, create ui: 4.5s, gradio launch: 8.9s).
Creating model from config: F:\sd\webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Applying attention optimization: xformers... done.
Model loaded in 260.1s (load weights from disk: 15.9s, load config: 0.1s, create model: 3.3s, apply weights to model: 239.5s, load textual inversion embeddings: 0.2s, calculate empty prompt: 0.9s).
  0%|                                                                                           | 0/16 [00:01<?, ?it/s]
*** Error completing request
*** Arguments: ('task(nviyo9m3o7r58ch)', 0, 'add a car', '', [], <PIL.Image.Image image mode=RGBA size=768x768 at 0x183A4F7F9D0>, None, None, None, None, None, None, 20, 'DPM++ 2M Karras', 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x00000183A4FAF1F0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "F:\sd\webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "F:\sd\webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "F:\sd\webui\modules\img2img.py", line 208, in img2img
        processed = process_images(p)
      File "F:\sd\webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "F:\sd\webui\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "F:\sd\webui\modules\processing.py", line 1528, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "F:\sd\webui\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\sd\webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "F:\sd\webui\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\sd\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "F:\sd\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "F:\sd\system\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd\webui\modules\sd_samplers_cfg_denoiser.py", line 201, in forward
        devices.test_for_nans(x_out, "unet")
      File "F:\sd\webui\modules\devices.py", line 136, in test_for_nans
        raise NansException(message)
    modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

---

Additional information

Works correctly on an Nvidia Quadro P1000 (4 GB VRAM).
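
For what it's worth, here is a minimal sketch (my own illustration, not the webui's actual code) of why half precision can produce the all-NaN tensor the error complains about: float16 overflows around 65504, and the resulting infinities turn into NaNs in later arithmetic.

    import torch

    x = torch.tensor([70000.0], dtype=torch.float16)  # exceeds float16 max (~65504), becomes inf
    print(x)       # tensor([inf], dtype=torch.float16)
    print(x - x)   # inf - inf -> tensor([nan], dtype=torch.float16)

    def all_nan(t: torch.Tensor) -> bool:
        # Check in the spirit of modules.devices.test_for_nans: the webui raises
        # NansException when the whole sampler output tensor is NaN.
        return torch.isnan(t).all().item()

    print(all_nan(x - x))  # True -- this condition is what aborts the generation above

Running the model in float32 (--no-half) or upcasting the affected layers avoids the overflow at the cost of more VRAM.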

AlUlkesh commented 9 months ago

Did you try the other option the error message suggests?

Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this.
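In case it helps: one way to apply the --no-half suggestion, assuming the default Windows launcher and keeping the --medvram --xformers flags from your startup log, is to edit webui-user.bat and relaunch:

    rem webui-user.bat -- add --no-half to the existing arguments
    set COMMANDLINE_ARGS=--medvram --xformers --no-half

The other option, "Upcast cross attention layer to float32", is the checkbox under Settings > Stable Diffusion that the error message refers to.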

mcDandy commented 9 months ago

I did not. However, it somehow fixed itself... The only thing I did was change the plugins I have installed.
