Closed timmyhk852 closed 1 year ago
Same issue on webui 1.4.0.
python launch.py --autolaunch --xformers --disable-nan-check
Python 3.10.11 | packaged by conda-forge | (main, May 10 2023, 18:51:25) [MSC v.1934 64 bit (AMD64)]
Version: v1.4.0-101-gb42c0ef6
Commit hash: b42c0ef6c31e38db52aecdb38908238dc81c5f01
Installing requirements
Launching Web UI with arguments: --autolaunch --xformers --disable-nan-check
[-] ADetailer initialized. version: 23.6.4, num models: 8
2023-06-28 11:01:32,105 - ControlNet - INFO - ControlNet v1.1.227
ControlNet preprocessor location: D:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-06-28 11:01:32,242 - ControlNet - INFO - ControlNet v1.1.227
Loading weights [5998292c04] from D:\stable-diffusion-webui\models\Stable-diffusion\Counterfeit-V3.0_fp16-no-ema.safetensors
Creating model from config: D:\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
DiffusionWrapper has 859.52 M params.
Startup time: 9.9s (import torch: 1.6s, import gradio: 1.8s, import ldm: 0.5s, other imports: 0.9s, list SD models: 0.3s, load scripts: 3.5s, create ui: 0.8s, gradio launch: 0.3s).
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\kl-f8-anime2.safetensors
preload_extensions_git_metadata for 16 extensions took 5.01s
Applying attention optimization: xformers... done.
Textual inversion embeddings loaded(6): bad_prompt, bad_prompt_version2, badhandv4, EasyNegative, EasyNegativeV2, negative_hand-neg
Model loaded in 6.7s (load weights from disk: 0.7s, create model: 0.9s, apply weights to model: 2.0s, apply half(): 1.1s, load VAE: 0.3s, move model to device: 1.5s).
2023-06-28 11:02:30,889 - ControlNet - INFO - Loading model: control_v11f1p_sd15_depth [cfd03158]
2023-06-28 11:02:31,430 - ControlNet - INFO - Loaded state_dict from [D:\stable-diffusion-webui\extensions\sd-webui-controlnet\models\control_v11f1p_sd15_depth.pth]
2023-06-28 11:02:31,430 - ControlNet - INFO - Loading config: D:\stable-diffusion-webui\extensions\sd-webui-controlnet\models\control_v11f1p_sd15_depth.yaml
2023-06-28 11:02:33,531 - ControlNet - INFO - ControlNet model control_v11f1p_sd15_depth [cfd03158] loaded.
2023-06-28 11:02:33,608 - ControlNet - INFO - Loading preprocessor: depth
2023-06-28 11:02:33,609 - ControlNet - INFO - preprocessor resolution = 512
Downloading: "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt" to D:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads\midas\dpt_hybrid-midas-501f0c75.pt
100%|███████████████████████████████████████████████████████████████████████████████| 470M/470M [00:43<00:00, 11.4MB/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:09<00:00, 2.12it/s]
Total progress: 50%|█████████████████████████████████ | 20/40 [00:07<00:08, 2.36it/s]
0: 640x448 1 face, 73.0ms
Speed: 2.0ms preprocess, 73.0ms inference, 2.1ms postprocess per image at shape (1, 3, 640, 640)
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 5.68it/s]
0: 640x448 1 face, 7.0ms
Speed: 2.0ms preprocess, 7.0ms inference, 1.0ms postprocess per image at shape (1, 3, 640, 640)
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 5.81it/s]
2023-06-28 11:03:36,689 - ControlNet - INFO - Loading model from cache: control_v11f1p_sd15_depth [cfd03158]
2023-06-28 11:03:36,692 - ControlNet - INFO - Loading preprocessor: depth
2023-06-28 11:03:36,693 - ControlNet - INFO - preprocessor resolution = 512
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:08<00:00, 2.35it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 40/40 [00:24<00:00, 2.34it/s]
0: 640x448 1 face, 11.5ms
Speed: 1.5ms preprocess, 11.5ms inference, 6.5ms postprocess per image at shape (1, 3, 640, 640)
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 5.78it/s]
0: 640x448 1 face, 8.5ms
Speed: 1.0ms preprocess, 8.5ms inference, 1.0ms postprocess per image at shape (1, 3, 640, 640)
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 5.83it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 40/40 [00:30<00:00, 1.32it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 40/40 [00:30<00:00, 2.34it/s]
Working well
*** Error running postprocess_image: D:\StabilityAI\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py
    Traceback (most recent call last):
      File "D:\StabilityAI\stable-diffusion-webui\modules\scripts.py", line 514, in postprocess_image
        script.postprocess_image(p, pp, *script_args)
      File "D:\StabilityAI\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py", line 560, in postprocess_image
        is_processed |= self._postprocess_image(p, pp, args, n=n)
      File "D:\StabilityAI\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py", line 530, in _postprocess_image
        processed = process_images(p2)
      File "D:\StabilityAI\stable-diffusion-webui\modules\processing.py", line 620, in process_images
        res = process_images_inner(p)
      File "D:\StabilityAI\stable-diffusion-webui\modules\processing.py", line 743, in process_images_inner
        devices.test_for_nans(x, "vae")
      File "D:\StabilityAI\stable-diffusion-webui\modules\devices.py", line 158, in test_for_nans
        raise NansException(message)
    modules.devices.NansException: A tensor with all NaNs was produced in VAE. This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Version: v1.4.0
Commit hash: 394ffa7b0a7fff3ec484bcd084e673a8b301ccc8
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
[-] ADetailer initialized. version: 23.6.4, num models: 8
2023-06-28 10:39:47,250 - ControlNet - INFO - ControlNet v1.1.227
ControlNet preprocessor location: D:\StabilityAI\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-06-28 10:39:47,329 - ControlNet - INFO - ControlNet v1.1.227
Hi admin, you cannot reproduce the problem because you are using --disable-nan-check, but when I tried that argument, the pictures had a black square on the person's face.
No. The --disable-nan-check flag only determines whether an error is raised or a black image is returned. You can see that there are no black squares in my example either.
Version: v1.4.0-101-gb42c0ef6
Commit hash: b42c0ef6c31e38db52aecdb38908238dc81c5f01
What is this?
Describe the bug
If I use ControlNet with depth_midas, I get the error below.
But when I don't enable ControlNet, ADetailer works just fine.
Please fix the bug.
Full console logs
*** Error running postprocess_image: C:\Stable Diffusion\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py
    Traceback (most recent call last):
      File "C:\Stable Diffusion\stable-diffusion-webui\modules\scripts.py", line 514, in postprocess_image
        script.postprocess_image(p, pp, *script_args)
      File "C:\Stable Diffusion\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py", line 560, in postprocess_image
        is_processed |= self._postprocess_image(p, pp, args, n=n)
      File "C:\Stable Diffusion\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py", line 530, in _postprocess_image
        processed = process_images(p2)
      File "C:\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 620, in process_images
        res = process_images_inner(p)
      File "C:\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 739, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 1316, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "C:\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 409, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 278, in launch_sampling
        return func()
      File "C:\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 409, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 553, in sample_dpmpp_sde
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 190, in forward
        devices.test_for_nans(x_out, "unet")
      File "C:\Stable Diffusion\stable-diffusion-webui\modules\devices.py", line 158, in test_for_nans
        raise NansException(message)
    modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
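For anyone trying the flags that error message suggests, a sketch of where they go on a Windows install, assuming the stock webui-user.bat launcher (the file and variable names are the standard AUTOMATIC1111 convention; the exact flag combination here is an example, not a confirmed fix):

```shell
@echo off
rem webui-user.bat -- launcher configuration for stable-diffusion-webui (sketch)
set PYTHON=
set GIT=
set VENV_DIR=
rem --no-half-vae targets VAE NaNs; add --no-half if the UNet itself
rem produces NaNs (both cost some speed/VRAM)
set COMMANDLINE_ARGS=--autolaunch --xformers --no-half-vae
call webui.bat
```

On Linux/macOS the equivalent line goes in webui-user.sh as `export COMMANDLINE_ARGS="..."`.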
List of installed extensions
No response
The same here. Please fix.
Same problem after the last WebUI update.
Version: v1.4.0 Commit hash: 394ffa7b0a7fff3ec484bcd084e673a8b301ccc8
Same for me too.
I tried using --disable-nan-check too, and it returns a black image over the face.
When I use the multidiffusion region prompt control extension, exactly the same error occurred.
Also have this problem. It definitely does NOT occur if adetailer is not enabled.
I get the same black box on all faces when using adetailer with controlnet.
For the NansException itself, see this page: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7633
After seeing several cases, I found that there are LoRAs, VAEs, and embeddings that cause the NaNs error only in inpainting.
I found out how to make mine work by accident, so I don't know why or how to explain why it works, but I rolled back automatic1111 to the commit you used in the pic you posted, bing-su b42c0ef6c31e38db52aecdb38908238dc81c5f01, and it works fine for me now.
It also works on my runpod on baf6946e06249c5af9851c60171692c44ef633e0
It looks like I'm on xformers 0.0.17 now instead of 0.0.20, but I'm not sure that matters either, because I saw someone else with the same problem who wasn't using xformers.
Just for reference, I tried rolling back controlnet and adetailer separately, trying different combinations of controlnets, and adding --disable-nan-check and --no-half-vae, but none of that worked for me.
Having the same error
how to roll back to this version please?
git checkout <commit hash>
You can roll back with this command. It appears that an update after the 27th is causing the problem. I rolled back to the June 18th update, version f7ae0e68c9c91cd95e28552ef930299286026cd7, and it worked.
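For reference, a sketch of the full rollback, using the known-good hash from the comment above (run inside your stable-diffusion-webui checkout; the commands below also demonstrate the mechanism on a throwaway repo so they are safe to run anywhere):

```shell
# In your actual webui checkout you would run:
#   git checkout f7ae0e68c9c91cd95e28552ef930299286026cd7
# and later, to return to the latest version:
#   git checkout master && git pull

# Demonstration of the same mechanism on a throwaway repository:
repo="$(mktemp -d)"
git init -q "$repo"
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "known-good"
good="$(git -C "$repo" rev-parse HEAD)"
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "broken update"
git -C "$repo" checkout -q "$good"   # HEAD now points at the known-good commit
```

Note that checking out a bare commit hash leaves the repo in a detached-HEAD state, which is fine for pinning a version; `git checkout master` reattaches it.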
Thanks, solved. The problem really is in the webui.
Can we file an upstream issue with automatic1111 with relevant details so they fix it on their end?
Today I had no black box, NansException error, or any other problems using ControlNet together with ADetailer. It seems fixed on the current webui version. I even turned off --disable-nan-check and had no issues.