AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: load sd model 2-1 got error #11916

Closed ucas010 closed 1 year ago

ucas010 commented 1 year ago

Is there an existing issue for this?

What happened?

NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
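The exception message itself names the workaround. A minimal sketch of applying it via `webui-user.sh` (the flag names `--no-half` and `--disable-nan-check` come straight from the error text; the surrounding file layout is the stock one, so adapt to your install):

```shell
#!/usr/bin/env bash
# webui-user.sh -- sketch only; merge with your existing file.
# --no-half runs the model in float32, avoiding NaNs on cards with
# poor fp16 support (slower and uses more VRAM).
export COMMANDLINE_ARGS="--no-half"

# Alternatively, keep fp16 and only disable the check (not recommended,
# since NaN outputs will then silently produce black images):
# export COMMANDLINE_ARGS="--disable-nan-check"
```

The equivalent UI-side fix is enabling "Upcast cross attention layer to float32" under Settings > Stable Diffusion, as the message suggests.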

Steps to reproduce the problem

1. Download the safetensors checkpoint from Hugging Face and move it to models/Stable-diffusion/.
2. Click the refresh button, then load the safetensors checkpoint.
3. Run img2img; the error above is raised.

What should have happened?

no bug

Version or Commit where the problem happens

f865d3e11647dfd6c7b2cdf90dde24680e58acd8

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

Nvidia GPUs (RTX 20 above)

Cross attention optimization

Automatic

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

no

List of extensions

no

Console logs

*** Error completing request
*** Arguments: ('task(m361cud19qe6nsa)', 0, '(RAW photo, best quality), (realistic, photo-realistic:1.3), best quality ,masterpiece, an extremely delicate and beautiful, extremely detailed ,CG ,unity ,8k wallpaper, Amazing, finely detail, masterpiece,best quality, extremely detailed CG unity 8k wallpaper,absurdres, incredibly absurdres, huge filesize , ultra-detailed, highres, extremely detailed, iu,asymmetrical bangs,short bangs,bangs,pureerosface_v1,beautiful detailed girl, extremely detailed eyes and face, beautiful detailed eyes,\nlight on face,smile,bed, ', 'EasyNegative, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans,extra fingers,fewer fingers,((watermark:2)),(white letters:1), (multi nipples), lowres, bad anatomy, bad hands, text, error, missing fingers,extra digit, fewer digits, cropped, worst quality, low qualitynormal quality, jpeg artifacts, signature, watermark, username,bad feet, Multiple people,lowres,bad anatomy,bad hands, text, error, missing fingers,extra digit, fewer digits, cropped, worstquality, low quality, normal quality,jpegartifacts,signature, watermark, blurry,bad feet,cropped,poorly drawn hands,poorly drawn face,mutation,deformed,worst quality,low quality,normal quality,jpeg artifacts,signature,extra fingers,fewer digits,extra limbs,extra arms,extra legs,malformed limbs,fused fingers,too many fingers,long neck,cross-eyed,mutated hands,polar lowres,bad body,bad proportions,gross proportions,text,error,missing fingers,missing arms,missing legs,extra digit, ', [], <PIL.Image.Image image mode=RGBA size=2448x3264 at 0x7F60CEA52BF0>, None, None, None, None, None, None, 23, 0, 4, 0, 1, False, False, 6, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], 0, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 
4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
    Traceback (most recent call last):
      File "/stable-diffusion-webui/modules/call_queue.py", line 55, in f
        res = list(func(*args, **kwargs))
      File "/stable-diffusion-webui/modules/call_queue.py", line 35, in f
        res = func(*args, **kwargs)
      File "/stable-diffusion-webui/modules/img2img.py", line 198, in img2img
        processed = process_images(p)
      File "/stable-diffusion-webui/modules/processing.py", line 620, in process_images
        res = process_images_inner(p)
      File "/stable-diffusion-webui/modules/processing.py", line 739, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "/stable-diffusion-webui/modules/processing.py", line 1316, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 409, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 278, in launch_sampling
        return func()
      File "/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 409, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/stable/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "stable/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 190, in forward
        devices.test_for_nans(x_out, "unet")
      File "/stable-diffusion-webui/modules/devices.py", line 158, in test_for_nans
        raise NansException(message)
    modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

Additional information

No response

dhwz commented 1 year ago

Not a bug; you just did not do what the exception message recommends.

ucas010 commented 1 year ago

Thanks, but the results are not good, @dhwz.

Xavia1991 commented 1 year ago

Look at some tutorials... you are making pretty basic errors. In this one it is 100% the wrong aspect ratio.
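To illustrate the aspect-ratio point: the log shows a 2448x3264 (3:4 portrait) source image being rendered at 512x512, which distorts the output. A small sketch of picking output dimensions that keep the source ratio (the function name and the 64-pixel rounding are my own, chosen to match Stable Diffusion's usual dimension constraint):

```python
def match_aspect(src_w: int, src_h: int, target_short: int = 512, multiple: int = 64):
    """Return an output size with roughly the source aspect ratio,
    short side fixed at target_short, long side rounded to a
    multiple of 64 (SD works in 64-px latent blocks)."""
    if src_w <= src_h:  # portrait or square
        w = target_short
        h = round(src_h / src_w * target_short / multiple) * multiple
    else:  # landscape
        h = target_short
        w = round(src_w / src_h * target_short / multiple) * multiple
    return w, h

# The source image from the traceback:
print(match_aspect(2448, 3264))  # -> (512, 704), not 512x512
```

Setting width/height in the img2img tab to a ratio-matched size like this (or cropping the source first) avoids the squashed faces and limbs the screenshot shows.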