Closed: speedracerlo closed this issue 1 year ago
During generation I see this:
```
Reusing loaded model majicmixRealistic_v5Preview.safetensors [a38fa861a2] to load mistoonAnime_v10.safetensors
Calculating sha256 for K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\models\Stable-diffusion\mistoonAnime_v10.safetensors: a49140c6c58b7025f36fd7e206e0aa70f39f24940cd84e4f3589578239305b15
Loading weights [a49140c6c5] from K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\models\Stable-diffusion\mistoonAnime_v10.safetensors
Loading VAE weights specified in settings: K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying attention optimization: sdp-no-mem... done.
Weights loaded in 7.6s (send model to cpu: 0.6s, calculate hash: 5.7s, load weights from disk: 0.1s, apply weights to model: 0.3s, load VAE: 0.2s, move model to device: 0.5s).
2023-10-06 00:39:18,611 - AnimateDiff - INFO - AnimateDiff process start.
2023-10-06 00:39:18,611 - AnimateDiff - INFO - You are using mm_sd_v15_v2.ckpt, which has been tested and supported.
2023-10-06 00:39:18,618 - AnimateDiff - INFO - Injecting motion module animatediffMotion_v15V2.ckpt into SD1.5 UNet middle block.
2023-10-06 00:39:18,619 - AnimateDiff - INFO - Injecting motion module animatediffMotion_v15V2.ckpt into SD1.5 UNet input blocks.
2023-10-06 00:39:18,619 - AnimateDiff - INFO - Injecting motion module animatediffMotion_v15V2.ckpt into SD1.5 UNet output blocks.
2023-10-06 00:39:18,619 - AnimateDiff - INFO - Setting DDIM alpha.
2023-10-06 00:39:18,621 - AnimateDiff - INFO - Injection finished.
2023-10-06 00:39:18,621 - AnimateDiff - INFO - Hacking lora to support motion lora
2023-10-06 00:39:18,621 - AnimateDiff - INFO - Hacking CFGDenoiser forward function.
2023-10-06 00:39:18,622 - AnimateDiff - INFO - Hacking ControlNet.
```
```
*** Error running before_process: K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
Traceback (most recent call last):
  File "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\modules\scripts.py", line 611, in before_process
    script.before_process(p, *script_args)
  File "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 53, in before_process
    self.cn_hacker.hack(params)
  File "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 607, in hack
    self.hack_cn()
  File "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 132, in hack_cn
    from scripts.controlmodel_ipadapter import (PlugableIPAdapter,
ModuleNotFoundError: No module named 'scripts.controlmodel_ipadapter'
```
```
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [01:01<00:00,  3.06s/it]
Error completing request
Arguments: ('task(56t0k304g4ciwxl)', 'masterpiece, a warrior walking in a cyberpunk city', 'EasyNegative', [], 20, 'DPM2 a Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000029E4A00A140>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000029E4A106350>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000029E4A107CA0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000029E4A106230>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000029E4A1065C0>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\modules\processing.py", line 732, in process_images
    res = process_images_inner(p)
  File "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 104, in hacked_processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\modules\processing.py", line 875, in process_images_inner
    x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
  File "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\modules\processing.py", line 601, in decode_latent_batch
    raise e
  File "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\modules\processing.py", line 598, in decode_latent_batch
    devices.test_for_nans(sample, "vae")
  File "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\modules\devices.py", line 136, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in VAE. This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
```
You are probably using a very old ControlNet extension. Please update it to the current version.
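If you prefer to update manually rather than through the Extensions tab, a `git pull` inside the extension folder works. The path below is an assumption based on the default A1111 extensions layout and the install path shown in the logs above; adjust it to your setup:

```
cd "K:\Stable Diffusion Automatic1111 v2\stable-diffusion-webui\extensions\sd-webui-controlnet"
git pull
```

Restart the WebUI process afterwards so the updated extension code is actually loaded.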
I have this same issue. I disabled all other extensions but it does not help.
Also, you should always add `--no-half-vae` to your command-line arguments.
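On a stock Windows install, that flag goes on the `COMMANDLINE_ARGS` line of `webui-user.bat` (a minimal sketch assuming the default launcher; keep any other flags you already pass):

```bat
rem webui-user.bat -- append --no-half-vae to whatever arguments you already use
set COMMANDLINE_ARGS=--no-half-vae
```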
When I run it again after the error I get the same result as you, but if I just restart SD again it's OK!
The most common solution is to restart the WebUI AND the underlying app from scratch, not just reload the WebUI, remember.
This extension hacks the UNet every time you run a generation, and un-hacks everything when the generation finishes. You should absolutely restart if you encounter an error.
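The hack/un-hack cycle described above is essentially a monkey-patch with a restore step. This is a hypothetical sketch of the pattern (the names `UNet`, `hack`, and `unhack` are illustrative, not the extension's actual API); it also shows why a crash between hack and un-hack leaves the model in a patched state, which is what makes a restart the safest recovery:

```python
class UNet:
    """Stand-in for the real SD UNet (hypothetical)."""
    def forward(self, x):
        return x + 1

_original_forward = None

def hack(model):
    """Save the original forward() and replace it with a patched version."""
    global _original_forward
    _original_forward = model.forward
    def patched_forward(x):
        # A motion-module wrapper would do its extra work here.
        return _original_forward(x) * 2
    model.forward = patched_forward

def unhack(model):
    """Restore the saved original forward(). If an error aborts the
    generation before this runs, the patch stays in place."""
    global _original_forward
    if _original_forward is not None:
        model.forward = _original_forward
        _original_forward = None

unet = UNet()
hack(unet)
assert unet.forward(1) == 4   # patched: (1 + 1) * 2
unhack(unet)
assert unet.forward(1) == 2   # restored: 1 + 1
```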
Is there an existing issue for this?
Have you read the FAQ in the README?
What happened?
I downloaded the extension in A1111 and tried generating images with both txt2img and img2img, but both give me these weird, noisy results.
Steps to reproduce the problem
What should have happened?
Should have generated an animation.
Commit where the problem happens
webui: 1.6.0
extension: latest update
What browsers do you use to access the UI?
Google Chrome
Command Line Arguments
Console logs
Additional information
No response