AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check. #12921

Open PhotiniDev opened 1 year ago

PhotiniDev commented 1 year ago

Is there an existing issue for this?

  • [x] I have searched the existing issues and checked the recent builds/commits

What happened?

Normally A1111 features work fine with SDXL Base and SDXL Refiner. But on 3 occasions over the past 4-6 weeks I have had this same bug; I've tried all the suggestions and the A1111 troubleshooting page with no success. The only way I have successfully fixed it is with a re-install from scratch. I run SDXL Base txt2img and it works fine. Then I run SDXL Refiner img2img and receive the error, regardless of whether I use "send to img2img" or "Batch img2img".

Error Message: NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
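
For context on the "not enough precision" wording: float16 can only represent values up to about 65504, so an activation that overflows becomes inf, and further arithmetic on inf turns into NaN. A minimal PyTorch illustration of the numeric failure mode (not webui code, just the behavior):

import torch

x = torch.tensor([70000.0], dtype=torch.float16)  # exceeds the fp16 maximum (~65504)
print(x)      # tensor([inf], dtype=torch.float16)
print(x - x)  # inf - inf = tensor([nan], dtype=torch.float16)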

Steps to reproduce the problem

  1. Go to img2img
  2. Press Generate
  3. Receive the error message

What should have happened?

Normally when working, it will batch refine and generate all the images from the input directory into the output directory

Sysinfo

sysinfo-2023-08-31-18-35.txt

What browsers do you use to access the UI ?

Google Chrome

Console logs

venv "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\Scripts\Python.exe"
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.1
Commit hash: <none>
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [7440042bbd] from C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\models\Stable-diffusion\sd_xl_refiner_1.0.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 14.7s (launcher: 3.4s, import torch: 4.7s, import gradio: 1.3s, setup paths: 1.0s, other imports: 1.2s, load scripts: 1.6s, create ui: 1.0s, gradio launch: 0.4s).
Creating model from config: C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\repositories\generative-models\configs\inference\sd_xl_refiner.yaml
Applying attention optimization: Doggettx... done.
Model loaded in 7.1s (load weights from disk: 1.8s, create model: 0.3s, apply weights to model: 1.4s, apply half(): 1.3s, move model to device: 1.8s, calculate empty prompt: 0.4s).
Will process 100 images, creating 1 new images for each.
  0%|                                                                                            | 0/6 [00:03<?, ?it/s]
*** Error completing request
*** Arguments: ('task(19bmqwbr6wil1q8)', 5, 'Photo of a scuba diving Hamster wearing a diving suit and googles surrounded by exotic fish and coral deep in the ocean', '', [], None, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.25, -1.0, -1.0, 0, 0, 0, False, 0, 1024, 1024, 1, 0, 0, 32, 0, 'C:\\Users\\Mono\\Desktop\\stable-diffusion-webui-master\\stable-diffusion-webui-master\\outputs\\txt2img-images\\2023-08-30', 'C:\\Users\\Mono\\Desktop\\stable-diffusion-webui-master\\stable-diffusion-webui-master\\outputs\\img2img-images', '', [], False, [], '', <gradio.routes.Request object at 0x0000021B4D7A2B30>, 0, True, False, False, False, 'base', '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}    Traceback (most recent call last):
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\img2img.py", line 226, in img2img
        process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args, to_scale=selected_scale_tab == 1, scale_by=scale_by, use_png_info=img2img_batch_use_png_info, png_info_props=img2img_batch_png_info_props, png_info_dir=img2img_batch_png_info_dir)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\img2img.py", line 114, in process_batch
        proc = process_images(p)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\processing.py", line 677, in process_images
        res = process_images_inner(p)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\processing.py", line 794, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\processing.py", line 1381, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\sd_samplers_kdiffusion.py", line 434, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\sd_samplers_kdiffusion.py", line 303, in launch_sampling
        return func()
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\sd_samplers_kdiffusion.py", line 434, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\sd_samplers_kdiffusion.py", line 215, in forward
        devices.test_for_nans(x_out, "unet")
      File "C:\Users\Mono\Desktop\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\devices.py", line 155, in test_for_nans
        raise NansException(message)
    modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

---

Additional information

The first time it happened was when NVIDIA notified me of a driver update.

The last time it happened was after I successfully generated 100 images using txt2img. It generated all 100 images, but the UI froze for 10 minutes before I manually closed the UI and the cmd window, and it hasn't worked since. I will have to re-install to get it working again.

I have just noticed my PC has switched to the Game Ready driver, but normally I use the Studio Driver.

PhotiniDev commented 1 year ago

Just tested with the Studio Driver and it's still not working; I will reinstall to get it working.

Ainaemaet commented 1 year ago

Same issue here occasionally; please let us know if a reinstall does it for you.

nekhtiari commented 1 year ago

Having the same issue as well since the new update to 1.6 :(

camaxide commented 1 year ago

(quotes the original issue report above in full)

Now that is strange... this is what I just did!! I tried yesterday: 1 image txt2img, then I upscaled it with img2img. Using the same settings I then set 100 images to render txt2img overnight. In the morning all images were done, but the UI was not responding to clicks, so I had to close it and reopen. Then I tried to upscale one with the same settings as yesterday - it doesn't work any more, the GUI is broken.

A1111 really needs to get things working with SDXL - I had no issues with ComfyUI (but I like the workflow better in A1111).

wejk-ewjslkj commented 1 year ago

Same issue here; I ran the same setup in ComfyUI successfully. Any ideas? Best regards

LockMan007 commented 1 year ago

Try this: set COMMANDLINE_ARGS=--api --no-half-vae --disable-nan-check --xformers --opt-split-attention --medvram

curtwagner1984 commented 1 year ago

Try this: set COMMANDLINE_ARGS=--api --no-half-vae --disable-nan-check --xformers --opt-split-attention --medvram

Could you elaborate on what this actually does? Because it seems to me that disabling the nan check isn't a good idea. If something is supposed to be there and it isn't, and we're just ignoring the check, it doesn't actually resolve the issue.
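
For reference, the guard that --disable-nan-check turns off is test_for_nans in modules/devices.py (visible in the traceback above). A rough sketch of the idea - not the exact source - is:

import torch

class NansException(Exception):
    pass

def test_for_nans(x: torch.Tensor, where: str, disable_nan_check: bool = False) -> None:
    # If the check is disabled, NaN latents pass through silently and
    # typically decode to black images instead of raising an error.
    if disable_nan_check:
        return
    if torch.isnan(x).all():
        raise NansException(f"A tensor with all NaNs was produced in {where}.")

So disabling it doesn't fix anything; it only swaps the exception for black output.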

andyyeh75 commented 1 year ago

@LockMan007 Sorry, actually it's still not working on my side.

Gouvernathor commented 1 year ago

@LockMan007 adding only --disable-nan-check to webui-user.bat generates only black images. Adding the whole thing as you wrote it got me this:

Traceback (most recent call last):
  File "D:\stable-diffusion-webui\launch.py", line 48, in <module>
    main()
  File "D:\stable-diffusion-webui\launch.py", line 44, in main
    start()
  File "D:\stable-diffusion-webui\modules\launch_utils.py", line 436, in start
    webui.webui()
  File "D:\stable-diffusion-webui\webui.py", line 112, in webui
    create_api(app)
  File "D:\stable-diffusion-webui\webui.py", line 22, in create_api
    api = Api(app, queue_lock)
          ^^^^^^^^^^^^^^^^^^^^
  File "D:\stable-diffusion-webui\modules\api\api.py", line 211, in __init__
    api_middleware(self.app)
  File "D:\stable-diffusion-webui\modules\api\api.py", line 148, in api_middleware
    @app.middleware("http")
     ^^^^^^^^^^^^^^^^^^^^^^
  File "D:\stable-diffusion-webui\venv\Lib\site-packages\fastapi\applications.py", line 895, in decorator
    self.add_middleware(BaseHTTPMiddleware, dispatch=func)
  File "D:\stable-diffusion-webui\venv\Lib\site-packages\starlette\applications.py", line 139, in add_middleware
    raise RuntimeError("Cannot add middleware after an application has started")
RuntimeError: Cannot add middleware after an application has started

Edit: dropping the --api part seems to have fixed it on my end. Actually it's --no-half-vae that solves the initial NaN bug.
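
For what it's worth, --no-half-vae keeps only the VAE in float32 while the UNet still runs in half precision, which is why it can stop the NaNs without the full speed cost of --no-half. A conceptual sketch with stand-in modules (not the webui source):

import torch
import torch.nn as nn

unet = nn.Linear(4, 4)  # stand-in for the real UNet
vae = nn.Linear(4, 4)   # stand-in for the real VAE decoder

unet.half()   # sampling still runs in fp16 for speed and VRAM
vae.float()   # --no-half-vae: decode in fp32, where overflow to NaN is far less likely

latent = unet(torch.randn(1, 4).half())  # fp16 latents from the sampler
image = vae(latent.float())              # upcast latents before the fp32 VAE decode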

Flerndip commented 1 year ago

This is happening constantly. --no-half-vae doesn't fix it, and disabling the nan-check just produces black images when it effs up. Switching between checkpoints can sometimes fix it temporarily, but it always returns.

Someone said they fixed this bug by using the launch argument --reinstall-xformers; I tried this, and hours later I have not re-encountered this bug.

LockMan007 commented 1 year ago

Try this: set COMMANDLINE_ARGS=--api --no-half-vae --disable-nan-check --xformers --opt-split-attention --medvram

Could you elaborate on what this actually does? Because it seems to me that disabling the nan check isn't a good idea. If something is supposed to be there and it isn't, and we're just ignoring the check, it doesn't actually resolve the issue.

I don't understand it; I just know that is what I do and it works. You can try adding parts or all of it and see if it works. I have it set to a custom port for various reasons.

This is what my customized copy of the .bat file I run looks like:

@echo off

set PYTHON="D:\AI\Python\Python310\python.exe"
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--api --no-half-vae --disable-nan-check --xformers --opt-split-attention --medvram --port 42000
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9,max_split_size_mb:512

call webui.bat

The path to PYTHON may not need to be set for you and would depend on where you have it anyway.

Sgrikkardo commented 1 year ago

I have the same problem: as soon as I try to do an img2img with SDXL I get "NansException: A tensor with all NaNs was produced in Unet.". The error is specific to SDXL; it's not present with 1.5 or other checkpoints. I tried changing every parameter, to no avail.

shirayu commented 1 year ago

This may help you.

(Settings -> Stable Diffusion -> Maximum number of checkpoints loaded at the same time) https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13020#issuecomment-1704382917
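
If you prefer editing the configuration directly, that UI setting is stored in config.json in the webui root. A small sketch, assuming the underlying key is sd_checkpoints_limit (verify the key name against your own config.json first):

import json
from pathlib import Path

cfg_path = Path("config.json")  # in the stable-diffusion-webui root
cfg = json.loads(cfg_path.read_text(encoding="utf-8"))
cfg["sd_checkpoints_limit"] = 2  # assumed key for "Maximum number of checkpoints loaded at the same time"
cfg_path.write_text(json.dumps(cfg, indent=4), encoding="utf-8")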

Sgrikkardo commented 1 year ago

This may help you.

(Settings -> Stable Diffusion -> Maximum number of checkpoints loaded at the same time) #13020 (comment)

I tried it; it worked, but only once. I managed to obtain an img2img with SDXL, but the second time I tried, it was back to a NaN, and I couldn't get another img2img no matter what.

JoejoeC commented 1 year ago

Open the stable-diffusion-webui root directory, locate webui.bat, and right-click to open it for editing.

Add the following line under set ERROR_REPORTING=FALSE, then save and restart:

set COMMANDLINE_ARGS=--no-half --disable-nan-check

riperbot commented 1 year ago

1) You need to update some things. I don't use xformers, but here is my "webui-user_update.bat":

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--reinstall-torch --reinstall-xformers --xformers
git pull
call webui.bat

2) A. I have an RTX 3090 Ti 24GB (with Resizable BAR activated on my ASUS motherboard) + 64GB RAM, and I couldn't solve this problem for a long time, but then I did. We need to load 2 checkpoints: base and refiner. So, as shirayu correctly pointed out where to look, go to "Settings -> Stable Diffusion -> Maximum number of checkpoints loaded at the same time" and set 2 instead of 1. Then restart the browser and terminal. Voila, everything works.

B. Also, to speed up the process, I unchecked "Upcast cross attention layer to float32" in the same "Stable Diffusion" settings. C. And also set "Settings -> Optimization -> Cross attention optimization -> sdp-no-mem - scaled dot product without memory efficient attention". B and C allow you to speed up the calculation process considerably! (See the attention sketch after this comment.)

3) I always update extensions. After an update, always close the browser and terminal.

4) In "webui-user.bat":

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers
call webui.bat

5) I noticed that if I work after the PC comes out of sleep mode, VRAM is detected with bad sectors, and therefore the 2nd generation in a row gives an error. But if I restart the PC, no bad sectors are detected in VRAM and everything works as it should. Thank you, Windows)

More info:

Using Windows 10, Firefox and Vivaldi browser (both working).

I've tested with "dreamshaperXL10_alpha2Xl10.safetensors" as the SD checkpoint, "sdxl-vae-fp16-fix.safetensors" as the SD VAE, and "sdXL_v10RefinerVAEFix.safetensors" as the Refiner.

Also, I have: version: v1.6.0 (AUTOMATIC1111) • python: 3.10.11 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2

"xformers" - is just an option in "Cross attention optimization" that you can select if you want to test.

P.S. ComfyUI has no such problems, but you have to get used to this interface)
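
Regarding point C in the comment above: in PyTorch 2.0 terms, "sdp-no-mem" roughly means using the built-in scaled-dot-product attention with the memory-efficient backend disabled. A hedged sketch of the idea (not the webui source; assumes a CUDA GPU and torch 2.0.x):

import torch
import torch.nn.functional as F

q = k = v = torch.randn(1, 8, 77, 64, device="cuda", dtype=torch.float16)

# Disable the memory-efficient backend; let the flash/math kernels handle it.
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=True, enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, k, v)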

raspitakesovertheworld commented 1 year ago

This may help you. (Settings -> Stable Diffusion -> Maximum number of checkpoints loaded at the same time) #13020 (comment)

I tried it; it worked, but only once. I managed to obtain an img2img with SDXL, but the second time I tried, it was back to a NaN, and I couldn't get another img2img no matter what.

That does not work and is not the cause of the error; I have had it set to 2 for a long time and I still get the error.

What is the solution for this bug now? None of the proposed solutions work.

joli-coeur50 commented 12 months ago

Same issue here.

joli-coeur50 commented 12 months ago

Settings > Stable Diffusion > check "Upcast cross attention layer to float32"
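
For anyone wondering what that option changes: conceptually, the attention math is done in float32 even when the model weights are fp16, then cast back, since the QK^T/softmax step is where fp16 most easily overflows. A minimal sketch of the idea (not the actual webui implementation):

import torch

def attention_upcast(q, k, v):
    # Do the numerically fragile part (QK^T and softmax) in fp32,
    # then return to the original dtype to multiply with the values.
    dtype = q.dtype
    scale = q.shape[-1] ** -0.5
    attn = torch.softmax(q.float() @ k.float().transpose(-2, -1) * scale, dim=-1)
    return attn.to(dtype) @ v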

raspitakesovertheworld commented 12 months ago

No, that setting is already set and it still does not work; I'm getting the same error.

Zq5437 commented 11 months ago

Mac M1 Pro: I encountered the same problem, and I tried running ./webui.sh --no-half, which fixed it! After researching the related info, I think it might be because the Mac doesn't support what is called the "half type"; this command-line argument disables it. I hope this info is useful to you!

pickou commented 11 months ago

Met the same issue as well.

t-xl commented 11 months ago

I had the same problem.

eduardonba1 commented 11 months ago

Same here

nathanshipley commented 11 months ago

Just ran into this with img2img using any SDXL checkpoint in 1.7. Launching with --no-half fixes it in Linux here.

FWIW, Upcast cross attention layer to float32 did not make a difference. --disable-nan-check just generated black images.

Sgrikkardo commented 11 months ago

The problem is still present in 1.7. As previously pointed out, --no-half prevents the NaNs, but not having access to fp16 calculations is a problem which is still not addressed. For now I just generate a small random image in txt2img and then I can use img2img in half precision with no errors, but it's a workaround, not a solution

Gokhalesh commented 11 months ago

Sometimes changing models works, but there is no permanent solution to this.

tmheath commented 11 months ago

This is ongoing with the latest install script... Running Gentoo, none of the mentioned fixes work in seemingly any combination.

dadadaing10 commented 11 months ago

I have referred to the suggestions in the comments, but the error still occurs. Is there any way to solve it? Thank you very much!

Issue: modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

AngelTs commented 11 months ago

I confirm this bug too - for SDXL models, doing any (empty) txt2img before doing img2img fixes it!

eduardonba1 commented 11 months ago

This bug may be caused by the animatediff extension, even if you have not enabled its checkbox. I uninstalled it and it worked.

AngelTs commented 11 months ago

animatediff

Not quite - I deleted the animatediff extension "sd-webui-animatediff" from the Auto1111 extensions dir, and also removed all animatediff strings from two files, "config.json" and "ui-config.json". The bug continues to occur!

TenguMask commented 10 months ago

Not a universal tip, I'd even say it might be a specific one, yet I've noticed that I have the same problem (with my old GTX 960 4GB) when I overclock it 10-15% over its baseline (especially the core clock).

CHRSKURO commented 10 months ago

I confirm this bug too - for SDXL models, doing any (empty) txt2img before doing img2img fixes it!

It works, thx!!

reponum8 commented 10 months ago

Last time I checked: Settings -> VAE -> turn automatic VAE selection off, and turn converting NaNs to fp32 on. The NaN check is only a checker; it only checks and doesn't impact your image.

Zihann73 commented 10 months ago

I use v2-1_768-ema-pruned.ckpt and met this issue every time in txt2img. I fixed it by changing this setting: Settings -> Upcast cross attention layer to float32.

robertdeanmoore commented 10 months ago

I confirm this bug too - for SDXL models, doing any (empty) txt2img before doing img2img fixes it!

This is a workaround. I did this and then ReActor and SDXL base/refiner worked in img2img.

WillyamBradberry commented 9 months ago

This is happening constantly. --no-half-vae doesn't fix it, and disabling the nan-check just produces black images when it effs up. Switching between checkpoints can sometimes fix it temporarily, but it always returns.

Someone said they fixed this bug by using the launch argument --reinstall-xformers; I tried this, and hours later I have not re-encountered this bug.

Don't try this. All your SD will be ruined with this reinstall. And the rest of the day will be wasted trying to catch this freaking bug.

haoqipaopao commented 9 months ago

Getting the same error.

cabincrewaigirls commented 9 months ago

I added this shit to the webui-user.bat file. For the moment it's working for me:

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--disable-nan-check

call webui.bat

davidscid commented 8 months ago

Before generating in img2img, go to txt2img and generate a picture, then go back to img2img and it can be used. It works for me.

Finally, I found a solution to the problem for:

"Error Message: NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check."

trip54654 commented 8 months ago

Didn't read most of the issue, but for me this was caused by web-ui writing to checkpoint files, modifying them, and sometimes corrupting them. After I made all checkpoint files read-only, this never happened again to me.

Edit: Pretty sure that's bullshit, it was bad RAM. Still not sure why making the checkpoint file read-only seemingly fixed it.
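
If anyone wants to try the same read-only experiment, here is a quick way to strip write permission from all checkpoints (the path is the default models folder; adjust for your install):

import os
import stat
from pathlib import Path

models = Path("models/Stable-diffusion")  # default checkpoint folder in the webui root
for ckpt in models.glob("*.safetensors"):
    os.chmod(ckpt, stat.S_IREAD)  # read-only; also sets the read-only attribute on Windows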

yisuanwang commented 8 months ago

I also confirm this bug - for SDXL models, doing any (empty) txt2img before img2img fixes it!

Amazing, this works for me too.

shijinghuihub commented 2 months ago

After the error is reported in img2img, go to txt2img and run a random 512 image. If that runs, go back to img2img and inpaint, and it will run normally.