ArtVentureX / sd-webui-agent-scheduler


This extension breaks my entire install of Automatic1111... #169

Open · Anonymous4280 opened this issue 1 year ago

Anonymous4280 commented 1 year ago

It works fine for about three image generations, then it stops and hangs on whatever the current task is, forever. I can click Generate again, but that just puts another job in the queue, which obviously doesn't help because it's still hung on the current task. The console prints error messages. Here is a complete copy-paste of the console from launch to the error when it hangs:

Already up to date.
venv "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.6.0-2-g4afaaf8a
Commit hash: 4afaaf8a020c1df457bcf7250cb1c7f609699fa7
Launching Web UI with arguments: --deepdanbooru --xformers --autolaunch --theme dark
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 23.11.0, num models: 9
Using sqlite file: C:\Users\admin\Desktop\h\stable-diffusion-webui\extensions\sd-webui-agent-scheduler\task_scheduler.sqlite3
2023-11-07 18:31:10,327 - ControlNet - INFO - ControlNet v1.1.415
ControlNet preprocessor location: C:\Users\admin\Desktop\h\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-11-07 18:31:10,623 - ControlNet - INFO - ControlNet v1.1.415
Loading weights [543bcbc212] from C:\Users\admin\Desktop\h\stable-diffusion-webui\models\Stable-diffusion\Anything-V3.0-pruned.ckpt
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 30.7s (prepare environment: 6.4s, import torch: 11.5s, import gradio: 1.5s, setup paths: 1.5s, initialize shared: 0.4s, other imports: 1.0s, setup codeformer: 0.3s, load scripts: 5.3s, create ui: 1.1s, gradio launch: 0.8s, app_started_callback: 0.9s).
Creating model from config: C:\Users\admin\Desktop\h\stable-diffusion-webui\configs\v1-inference.yaml
Loading VAE weights specified in settings: C:\Users\admin\Desktop\h\stable-diffusion-webui\models\Stable-diffusion\Anything-V3.0.vae.pt
Applying attention optimization: xformers... done.
Model loaded in 24.4s (load weights from disk: 13.8s, load config: 0.1s, create model: 3.1s, apply weights to model: 5.3s, load VAE: 1.1s, load textual inversion embeddings: 0.2s, calculate empty prompt: 0.6s).
  8%|██████▍ | 12/150 [00:28<05:28, 2.38s/it]
Exception in thread MemMon: | 12/150 [00:24<05:00, 2.18s/it]
Traceback (most recent call last):
  File "C:\Users\admin\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\memmon.py", line 53, in run
    free, total = self.cuda_mem_get_info()
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\memmon.py", line 34, in cuda_mem_get_info
    return torch.cuda.mem_get_info(index)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 618, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
RuntimeError: CUDA error: misaligned address
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
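Note that the MemMon crash is only where the underlying failure first surfaces: "misaligned address" is an asynchronous CUDA error, and as PyTorch's own message says, the trace may not point at the real offender. Below is a minimal, hedged sketch (not part of the webui; run with the venv's python.exe) of how one could follow the message's suggestion and check whether the CUDA context is still usable. For the webui itself, CUDA_LAUNCH_BLOCKING would have to be set in the launcher's environment (for example in webui-user.bat) rather than in Python, since it must be set before torch initializes CUDA.

```python
# Standalone diagnostic sketch, assuming it is run from the webui's venv.
import os

os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before the first CUDA call

import torch

if not torch.cuda.is_available():
    print("CUDA not available")
else:
    try:
        # Same query that raises inside modules/memmon.py.
        free, total = torch.cuda.mem_get_info(0)
        print(f"VRAM free/total: {free / 2**20:.0f} / {total / 2**20:.0f} MiB")
    except RuntimeError as err:
        # Once a sticky error like "misaligned address" is raised, the CUDA context
        # is poisoned and every later CUDA call in the same process fails too.
        print(f"CUDA context unusable: {err}")
```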

Error completing request
Arguments: ('task(av614cotlmiyqfa)', 'example prompt 1', 'example negative prompt 1', [], 150, 'Euler a', 1, 1, 7, 768, 512, False, 0.7, 10, '4x-AnimeSharp', 150, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000002041E3FFEE0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x00000204490B5FF0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x00000203F8204CD0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x00000203FA6323E0>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\processing.py", line 732, in process_images
    res = process_images_inner(p)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\processing.py", line 1140, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
    return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 82, in forward
    x = layer(x, emb)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 249, in forward
    return checkpoint(
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 273, in _forward
    h = self.out_layers(h)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 444, in network_Conv2d_forward
    return originals.Conv2d_forward(self, input)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED


Traceback (most recent call last):
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\call_queue.py", line 77, in f
    devices.torch_gc()
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\devices.py", line 51, in torch_gc
    torch.cuda.empty_cache()
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 133, in empty_cache
    torch._C._cuda_emptyCache()
RuntimeError: CUDA error: misaligned address
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Exception in thread Thread-31 (execute_task):
Traceback (most recent call last):
  File "C:\Users\admin\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\admin\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\extensions\sd-webui-agent-scheduler\agent_scheduler\task_runner.py", line 346, in execute_task
    res = self.execute_task(task_id, is_img2img, task_args)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\extensions\sd-webui-agent-scheduler\agent_scheduler\task_runner.py", line 430, in execute_task
    return self.__execute_ui_task(task_id, is_img2img, ui_args)
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\extensions\sd-webui-agent-scheduler\agent_scheduler\task_runner.py", line 443, in __execute_ui_task
    shared.state.begin()
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\shared_state.py", line 119, in begin
    devices.torch_gc()
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\modules\devices.py", line 51, in torch_gc
    torch.cuda.empty_cache()
  File "C:\Users\admin\Desktop\h\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 133, in empty_cache
    torch._C._cuda_emptyCache()
RuntimeError: CUDA error: misaligned address
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
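This last trace is also why the queue appears to hang forever instead of failing: the scheduler's worker thread dies inside shared.state.begin() → devices.torch_gc() → torch.cuda.empty_cache(), because the CUDA context is already poisoned by the earlier error, so tasks queued afterwards are never picked up. Purely as an illustration (this is not the extension's actual code, and it would not fix the underlying CUDA fault), a defensive guard around that cleanup might look like this:

```python
# Illustrative sketch only; the name mirrors the webui's torch_gc but this is not its code.
import torch

def safe_torch_gc() -> None:
    """Best-effort CUDA cache cleanup that never kills the calling thread."""
    if not torch.cuda.is_available():
        return
    try:
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()
    except RuntimeError as err:
        # A sticky CUDA error ("misaligned address", CUDNN_STATUS_EXECUTION_FAILED)
        # poisons the context; skipping GC keeps the worker alive, but the GPU stays
        # unusable until the whole process is restarted.
        print(f"torch_gc skipped, CUDA context is broken: {err}")
```

Even with such a guard, a full restart of the webui process is required once the context is in this state.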

I'm on Windows 10. I've tried:

- Restarting my computer
- Restarting Automatic1111 (by closing the console window and the browser tab, then starting it up again)
- Updating the cuDNN DLLs in \venv\Lib\site-packages\torch\lib to version 8.9.5

Nothing is working so far.
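One quick sanity check for the cuDNN swap mentioned above is to ask the venv's own torch which builds it actually loads. A small sketch (run it with the venv's python.exe; the exact numbers below are just what I'd expect, not taken from this report):

```python
# Report the torch / CUDA / cuDNN builds actually in use inside the venv.
import torch

print("torch:", torch.__version__)
print("CUDA runtime:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())  # 8905 would correspond to 8.9.5
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```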

artventuredev commented 1 year ago

Please try this comment: https://github.com/ArtVentureX/sd-webui-agent-scheduler/issues/126#issuecomment-1743991142

Anonymous4280 commented 1 year ago

> Please try this comment: #126 (comment)

Made no difference.