dermesut opened 2 months ago
Can you share your A1111 setting? i.e. commandline args, and hardware used?
Sure. Here are the lines that show up when the server starts:
"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.4
Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
CUDA 12.1
Launching Web UI with arguments: --allow-code --api --port 7861 --device-id 1 --xformers --no-half-vae
ControlNet preprocessor location: E:\ai_gh_repos\sd.webui\webui\extensions\sd-webui-controlnet\annotator\downloads
2024-06-03 13:23:51,219 - ControlNet - INFO - ControlNet v1.1.449
2024-06-03 13:23:51,351 - model_patcher_hook.py - INFO - init hooks applied
INFO:model_patcher_hook.py:init hooks applied
2024-06-03 13:23:51,351 - model_patcher_hook.py - INFO - sample hooks applied
INFO:model_patcher_hook.py:sample hooks applied
13:23:52 - ReActor - STATUS - Running v0.7.0-b7 on Device: CUDA
Loading weights [e5f3cbc5f7] from E:\ai_gh_repos\sd.webui_190\webui\models\Stable-diffusion\PR\realisticVisionV60B1_v60B1VAE.safetensors
Creating model from config: E:\ai_gh_repos\sd.webui_190\webui\configs\v1-inference.yaml
2024-06-03 13:23:53,315 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7861
To create a public link, set share=True in launch().
Startup time: 19.3s (prepare environment: 4.9s, import torch: 4.0s, import gradio: 0.9s, setup paths: 1.1s, initialize shared: 1.8s, other imports: 0.8s, load scripts: 3.4s, create ui: 1.0s, gradio launch: 0.5s, add APIs: 0.7s).
Applying attention optimization: xformers... done.
2024-06-03 13:24:21,610 - model_patcher_hook.py - INFO - Init p.model_patcher.
INFO:model_patcher_hook.py:Init p.model_patcher.
2024-06-03 13:24:21,620 - model_patcher_hook.py - INFO - Init p.hr_model_patcher.
INFO:model_patcher_hook.py:Init p.hr_model_patcher.
Model loaded in 30.8s (load weights from disk: 0.3s, create model: 1.1s, apply weights to model: 26.2s, apply fp8: 0.9s, load textual inversion embeddings: 0.4s, calculate empty prompt: 1.8s).
"
Maybe also interesting: after this error occurs, I deactivate IC-Light, try to generate, and get:
"
File "E:\ai_gh_repos\sd.webui_190\system\python\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 64, 64] to have 4 channels, but got 8 channels instead
"
So I can't generate anything with the current model anymore. This goes away if I change the model: generating an image works again. But if I switch back to the model that had IC-Light activated in the very first try, the same error appears again. That means I can only use that model again if I restart the server; for this session the model has become unusable.
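From the tracebacks, the failure mode looks like a weight patch that is applied but never fully reverted, so the in-memory checkpoint keeps modified weights. A minimal, hypothetical sketch of the backup/restore invariant involved (plain Python dicts instead of torch tensors; `patch_model`, `close`, and `weight_backup` mirror names visible in model_patcher.py, but the logic here is illustrative only, not the extension's actual implementation):

```python
class ToyModelPatcher:
    """Illustrative stand-in for a weight patcher with backup/restore."""

    def __init__(self, weights):
        self.weights = weights        # model weights, keyed by layer name
        self.weight_backup = {}       # originals saved before patching

    def patch_model(self, patches):
        # Back up each weight before modifying it in place.
        for key, diff in patches.items():
            self.weight_backup[key] = self.weights[key]
            self.weights[key] = self.weights[key] + diff

    def close(self):
        # Restore originals, then the backup must be empty again
        # (compare the assertion at model_patcher.py line 427).
        for key, original in self.weight_backup.items():
            self.weights[key] = original
        self.weight_backup.clear()
        assert len(self.weight_backup) == 0


patcher = ToyModelPatcher({"conv_in.weight": 1.0})
patcher.patch_model({"conv_in.weight": 0.5})  # weight becomes 1.5
patcher.close()                               # restored to 1.0
```

If `patch_model` raises halfway through (as the RuntimeError does here), `weight_backup` is non-empty when `close()` runs, and the checkpoint can be left with partially patched weights in memory, which would be consistent with the model only working again after a restart.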
EDIT: maybe this is also relevant: "ControlNet preprocessor location: E:\ai_gh_repos\sd.webui\webui\extensions\sd-webui-controlnet\annotator\downloads". You can see that this is a different path from the location of the webui instance itself. That's because I use symbolic links to folders located somewhere else, mainly folders that contain big files (models etc.), so I don't have to copy them for every version of webui/forge that I have installed.
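For reference, that sharing setup can be sketched like this (a hypothetical layout, using Python's `os.symlink` as a portable stand-in for Windows' `mklink /D`; all paths here are made up for illustration):

```python
import os
import tempfile

# Hypothetical layout: one shared models folder, linked into each webui install.
root = tempfile.mkdtemp()
shared_models = os.path.join(root, "shared", "models")
os.makedirs(shared_models)

install = os.path.join(root, "sd.webui_190", "webui")
os.makedirs(install)

# Roughly equivalent to: mklink /D <install>\models <shared_models> on Windows.
link = os.path.join(install, "models")
os.symlink(shared_models, link, target_is_directory=True)

# The link resolves to the shared folder, so every install sees the same files.
print(os.path.realpath(link) == os.path.realpath(shared_models))  # True
```

One caveat with this kind of setup: some extensions resolve the link and log (or cache) the real target path, which is why the ControlNet line above shows a different root than the webui instance.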
I just updated the IC-Light and Model Patcher extensions in A1111: still the same error. In fact, an additional piece of info shows up now:
"
...
Traceback (most recent call last):
File "E:\ai_gh_repos\sd.webui_190\webui\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "E:\ai_gh_repos\sd.webui_190\webui\modules\processing.py", line 845, in process_images
res = process_images_inner(p)
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "E:\ai_gh_repos\sd.webui_190\webui\modules\processing.py", line 981, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-model-patcher\scripts\model_patcher_hook.py", line 92, in wrapped_sample_func
patcher.patch_model()
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-model-patcher\lib_modelpatcher\model_patcher.py", line 399, in patch_model
self._patch_weights()
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-model-patcher\lib_modelpatcher\model_patcher.py", line 388, in _patch_weights
new_weight = weight_patch.apply(new_weight, key)
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-model-patcher\lib_modelpatcher\model_patcher.py", line 119, in apply
return self._patch_diff(model_weight, key)
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-model-patcher\lib_modelpatcher\model_patcher.py", line 168, in _patch_diff
return model_weight + self.alpha * self.weight.to(model_weight.device)
RuntimeError: Unsupported TypeMeta in ATen: class std::vector<unsigned long,class std::allocator > (please report this error)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\ai_gh_repos\sd.webui_190\webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "E:\ai_gh_repos\sd.webui_190\webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "E:\ai_gh_repos\sd.webui_190\webui\modules\txt2img.py", line 105, in txt2img
with closing(p):
File "contextlib.py", line 340, in __exit__
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-model-patcher\scripts\model_patcher_hook.py", line 69, in wrapped_close_func
return func(self, *args, **kwargs)
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-model-patcher\scripts\model_patcher_hook.py", line 67, in wrapped_close_func
patcher.close()
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-model-patcher\lib_modelpatcher\model_patcher.py", line 427, in close
assert len(self.weight_backup) == 0
AssertionError
"
Here's what the cmd window shows (none of the options generates an image; it's always an error):
2024-05-31 00:25:22,752 - model_patcher_hook.py - INFO - Init p.model_patcher.
INFO:model_patcher_hook.py:Init p.model_patcher.
2024-05-31 00:25:22,765 - model_patcher_hook.py - INFO - Init p.hr_model_patcher.
INFO:model_patcher_hook.py:Init p.hr_model_patcher.
Warning: field infotext in API payload not found in <modules.processing.StableDiffusionProcessingTxt2Img object at 0x00000291461FDAE0>.
Error completing request
Arguments: ('task(dxt3sj10ip0yn0l)', <gradio.routes.Request object at 0x0000029146278880>, '', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, 1, 0, False, 1, True, 0.0, 4, 0.0, 512, 512, True, 'None', 'None', 0, {'enabled': True, 'model_type': 'FC', 'input_fg': array([[[ 0, 5, 1, 255], [ 4, 6, 4, 255], [ 1, 4, 1, 255], ..., [ 13, 8, 3, 255], [ 12, 7, 3, 255], [ 12, 9, 4, 255]],
[[ 1, 6, 1, 255], [ 0, 4, 0, 255], [ 0, 5, 0, 255], ..., [ 12, 7, 6, 255], [ 9, 9, 3, 255], [ 11, 8, 4, 255]],
[[ 2, 4, 1, 255], [ 3, 5, 1, 255], [ 1, 5, 1, 255], ..., [ 12, 8, 6, 255], [ 12, 9, 6, 255], [ 11, 8, 6, 255]],
...,
[[ 6, 6, 3, 255], [ 5, 4, 3, 255], [ 4, 3, 2, 255], ..., [ 12, 9, 7, 255], [ 14, 11, 7, 255], [ 12, 9, 7, 255]],
[[ 4, 5, 1, 255], [ 3, 4, 0, 255], [ 3, 4, 1, 255], ..., [ 10, 8, 5, 255], [ 10, 8, 6, 255], [ 13, 10, 7, 255]],
[[ 3, 5, 3, 255], [ 4, 4, 2, 255], [ 5, 3, 3, 255], ..., [ 11, 6, 7, 255], [ 11, 6, 7, 255], [ 7, 5, 4, 255]]], dtype=uint8), 'uploaded_bg': None, 'bg_source_fc': 'None', 'bg_source_fbc': 'Use Background Image', 'remove_bg': True},
ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None),
ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None),
ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None),
None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 0.5, 0, 'from modules.processing import process_images\n\np.width = 768\np.height = 768\np.batch_size = 2\np.steps = 10\n\nreturn process_images(p)', 2, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "E:\ai_gh_repos\sd.webui_190\webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "E:\ai_gh_repos\sd.webui_190\webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "E:\ai_gh_repos\sd.webui_190\webui\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "E:\ai_gh_repos\sd.webui_190\webui\modules\processing.py", line 845, in process_images
res = process_images_inner(p)
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "E:\ai_gh_repos\sd.webui_190\webui\modules\processing.py", line 981, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-model-patcher\scripts\model_patcher_hook.py", line 67, in wrapped_sample_func
patcher.patch_model()
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-model-patcher\lib_modelpatcher\model_patcher.py", line 399, in patch_model
self._patch_weights()
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-model-patcher\lib_modelpatcher\model_patcher.py", line 388, in _patch_weights
new_weight = weight_patch.apply(new_weight, key)
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-model-patcher\lib_modelpatcher\model_patcher.py", line 119, in apply
return self._patch_diff(model_weight, key)
File "E:\ai_gh_repos\sd.webui_190\webui\extensions\sd-webui-model-patcher\lib_modelpatcher\model_patcher.py", line 168, in _patch_diff
return model_weight + self.alpha * self.weight.to(model_weight.device)
RuntimeError: Unsupported TypeMeta in ATen: class std::vector<unsigned long,class std::allocator > (please report this error)
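The line that fails in `_patch_diff` is a plain diff-style patch, new_weight = model_weight + alpha * diff. A minimal numeric sketch of that operation (plain Python floats instead of torch tensors; the class name and attributes mirror what the traceback shows, but the class itself is hypothetical):

```python
class ToyWeightPatch:
    """Illustrative diff patch, modeled on the expression in the traceback."""

    def __init__(self, alpha, weight):
        self.alpha = alpha    # patch strength
        self.weight = weight  # diff to add to the model weight

    def patch_diff(self, model_weight):
        # Mirrors: model_weight + self.alpha * self.weight.to(model_weight.device)
        return model_weight + self.alpha * self.weight


patch = ToyWeightPatch(alpha=0.5, weight=2.0)
print(patch.patch_diff(1.0))  # 2.0
```

Since the arithmetic itself is trivial, the "Unsupported TypeMeta in ATen" error suggests that `self.weight` is not a regular tensor at that point (its dtype metadata is a `std::vector`), i.e. the patch data handed to the patcher may be malformed rather than the addition being wrong; that is my reading of the traceback, not a confirmed diagnosis.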