AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: It gives an error when creating the image #10154

Closed kingmateo closed 1 year ago

kingmateo commented 1 year ago

Is there an existing issue for this?

What happened?

It was working properly. Suddenly it ran into a problem and produced the following message:

NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check. Time taken: 8.75s | Torch active/reserved: 2346/2406 MiB, Sys VRAM: 4054/4096 MiB (98.97%)

system profile: Processor : i5-9400F RAM : 16 GB VGA : GTX 1650 SUPER

Steps to reproduce the problem

  1. open webui-user.bat
  2. open url http://127.0.0.1:7860/
  3. Type prompt
  4. Width 350
  5. Height 350
  6. CFG Scale 17
  7. Sampling method : Euler
  8. Sampling steps 80
  9. click Generate

What should have happened?

I don't know, I just know that everything was working fine and suddenly everything went wrong

Commit where the problem happens

Latest version (python: 3.10.6  •  torch: 1.13.1+cu117  •  xformers: N/A  •  gradio: 3.28.1  •  commit: 5ab7f213  •  checkpoint: 7fb0fb0b10)

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=

call webui.bat

List of extensions

animator_extension canvas-zoom CFG-Schedule-for-Automatic1111-SD ebsynth_utility sd_save_intermediate_images sd-webui-3d-open-pose-editor sd-webui-additional-networks sd-webui-controlnet sd-webui-deforum sd-webui-lora-block-weight sd-webui-supermerger sd-webui-text2video stable-diffusion-webui-composable-lora stable-diffusion-webui-depthmap-script training-picker video_loopback_for_webui

Console logs

venv "C:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 5ab7f213bec2f816f9c5644becb32eb72c8ffb89
Installing requirements
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
==============================================================================
You are running torch 1.13.1+cu117.
The program is tested to work with torch 2.0.0.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
*** "Disable all extensions" option was set, will only load built-in extensions ***
Loading weights [7fb0fb0b10] from C:\AI\stable-diffusion-webui\models\Stable-diffusion\Cyberpunk_helper.safetensors
Creating model from config: C:\AI\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 16.5s (create model: 12.3s, apply half(): 0.8s, move model to device: 0.8s, load textual inversion embeddings: 2.6s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 28.8s (import torch: 3.4s, import gradio: 2.2s, import ldm: 0.8s, other imports: 2.0s, setup codeformer: 0.1s, load scripts: 1.1s, load SD checkpoint: 16.5s, create ui: 2.4s, gradio launch: 0.1s).
  0%|                                                                                           | 0/20 [00:05<?, ?it/s]
Error completing request
Arguments: ('task(lnbluxicx6tzh0m)', 'Cat', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\AI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "C:\AI\stable-diffusion-webui\modules\processing.py", line 515, in process_images
    res = process_images_inner(p)
  File "C:\AI\stable-diffusion-webui\modules\processing.py", line 669, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "C:\AI\stable-diffusion-webui\modules\processing.py", line 887, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 377, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 251, in launch_sampling
    return func()
  File "C:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 377, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 167, in forward
    devices.test_for_nans(x_out, "unet")
  File "C:\AI\stable-diffusion-webui\modules\devices.py", line 156, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

Additional information

I even enabled the "Upcast cross attention layer to float32" option.

I also failed to install xformers.

w-e-w commented 1 year ago

Read the last line of the logs.

pranshuthegamer commented 1 year ago

This is a problem. I'm encountering it on my GTX 1650 as well, using Arch Linux.

w-e-w commented 1 year ago

That is to be expected on a 1650: use `--lowvram --precision full --no-half --xformers`. `--medvram` is possible, but if you want to generate anything larger than 512x512 you have to use `--lowvram`.

Failed to install? Then nuke the install and get the new version of the web UI.
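The flags suggested above go into `webui-user.bat` via `COMMANDLINE_ARGS`. A sketch, based on the launcher posted earlier in this thread (the exact flag combination is the advice above, not a verified fix):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem Low-VRAM settings for a 4 GB card such as the GTX 1650
set COMMANDLINE_ARGS=--lowvram --precision full --no-half

call webui.bat
```

Add `--xformers` as well only if xformers actually installs in your environment.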

MagniDK commented 1 year ago

I have a 3060 Ti, 32 GB of RAM, and an i7-10700KF, and it gives this same error.

amoomehrshad commented 1 year ago

Hi guys, please help me. I get an error when I click on Interrogate CLIP:

*** Error interrogating
Traceback (most recent call last):
  File "C:\Users\A\Desktop\A1111\modules\interrogate.py", line 193, in interrogate
    self.load()
  File "C:\Users\A\Desktop\A1111\modules\interrogate.py", line 121, in load
    self.blip_model = self.load_blip_model()
  File "C:\Users\A\Desktop\A1111\modules\interrogate.py", line 101, in load_blip_model
    blip_model = models.blip.blip_decoder(pretrained=files[0], image_size=blip_image_eval_size, vit='base', med_config=os.path.join(paths.paths["BLIP"], "configs", "med_config.json"))
IndexError: list index out of range

GaneshBasu1 commented 1 year ago

"Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check."

How do I fix this?