openvinotoolkit / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Instructions for Linux install are incorrect #81

Open n4mwd opened 1 year ago

n4mwd commented 1 year ago

Is there an existing issue for this?

What happened?

There are numerous errors in the instructions for Linux install.

  1. On Linux, the Python interpreter is invoked as "python3", not "python".
  2. Executing "python -m venv sd_env" gives an error, whereas "python3 -m venv sd_env" completes silently and creates a subdirectory called sd_env (a quick sanity-check sequence is sketched after this list).
  3. Running the following commands appears to produce no errors:
     a) source sd_env/bin/activate
     b) git clone https://github.com/openvinotoolkit/stable-diffusion-webui.git
     c) cd stable-diffusion-webui
     d) export PYTORCH_TRACING_MODE=TORCHFX
     e) export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half"
  4. When running "./webui.sh", the first errors printed are "Cannot locate TCMalloc (improves CPU memory usage)" and "fatal: No names found, cannot describe anything."
  5. After a long download, the web interface starts. The "Accelerate with OpenVINO" script is selected, "v1-inference.yaml" and "v1-5-pruned-emaonly.safetensors" are selected, the prompt "flower with a bee on it" is entered in the prompt box, and Generate is clicked.
  6. At this point numerous errors are printed in konsole. Starting with: "[2023-11-24 16:51:21,717] torch._dynamo.symbolic_convert: [WARNING] /home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f0e5ab09a60> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments"

"list index out of range Traceback (most recent call last): File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/scripts/openvino_accelerate.py", line 200, in openvino_fx compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs) File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/scripts/openvino_accelerate.py", line 426, in openvino_compile_cached_model om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype]) IndexError: list index out of range"

  7. From there the problem snowballs, and errors pile on top of other errors.
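
For reference, here is a quick sanity check before creating the environment (a sketch only, assuming the interpreter is exposed as python3; the 3.10+ requirement comes from the published instructions):

python3 --version          # must report 3.10 or newer per the instructions
python3 -m venv sd_env     # note: python3, not python
source sd_env/bin/activate
python --version           # inside the venv, plain "python" resolves to the venv's interpreter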

Steps to reproduce the problem

On MX Linux, open Konsole and enter the commands below as given in the instructions:

# Make sure Python version is 3.10+
python3 -m venv sd_env
source sd_env/bin/activate
git clone https://github.com/openvinotoolkit/stable-diffusion-webui.git
cd stable-diffusion-webui

export PYTORCH_TRACING_MODE=TORCHFX
export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half"

# Launch the WebUI
./webui.sh

Once the UI opens in the browser, select "Accelerate with OpenVINO" from the Scripts menu.

Enter a prompt like "flower with a bee on it".

SD crashes.

What should have happened?

I think this bug is related to incorrect or incomplete instructions.

Sysinfo

{ "Platform": "Linux-6.0.0-6mx-amd64-x86_64-with-glibc2.31", "Python": "3.9.2", "Version": "1.6.0", "Commit": "44006297e03a07f28505d54d6ba5fd55e0c1292d", "Script path": "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui", "Data path": "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui", "Extensions dir": "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/extensions", "Checksum": "9ad46f4d60c8a6c171a72e111c583061aaf3a02638b19d17638614e2cd7bfcf4", "Commandline": [ "launch.py", "--skip-torch-cuda-test", "--precision", "full", "--no-half" ], "Torch env info": { "torch_version": "2.0.1+cu118", "is_debug_build": "False", "cuda_compiled_version": "11.8", "gcc_version": null, "clang_version": null, "cmake_version": "version 3.27.7", "os": "Debian GNU/Linux 11 (bullseye) (x86_64)", "libc_version": "glibc-2.31", "python_version": "3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (64-bit runtime)", "python_platform": "Linux-6.0.0-6mx-amd64-x86_64-with-glibc2.31", "is_cuda_available": "False", "cuda_runtime_version": null, "cuda_module_loading": "N/A", "nvidia_driver_version": null, "nvidia_gpu_models": null, "cudnn_version": null, "pip_version": "pip3", "pip_packages": [ "numpy==1.23.5", "open-clip-torch==2.20.0", "pytorch-lightning==1.9.4", "torch==2.0.1+cu118", "torchdiffeq==0.2.3", "torchmetrics==1.2.0", "torchsde==0.2.5", "torchvision==0.15.2+cu118" ], "conda_packages": null, "hip_compiled_version": "N/A", "hip_runtime_version": "N/A", "miopen_runtime_version": "N/A", "caching_allocator_config": "", "is_xnnpack_available": "True", "cpu_info": [ "Architecture: x86_64", "CPU op-mode(s): 32-bit, 64-bit", "Byte Order: Little Endian", "Address sizes: 39 bits physical, 48 bits virtual", "CPU(s): 8", "On-line CPU(s) list: 0-7", "Thread(s) per core: 2", "Core(s) per socket: 4", "Socket(s): 1", "NUMA node(s): 1", "Vendor ID: GenuineIntel", "CPU family: 6", "Model: 140", "Model name: 11th Gen Intel(R) Core(TM) i5-11320H @ 3.20GHz", "Stepping: 2", "CPU MHz: 400.000", "CPU max MHz: 4500.0000", "CPU min MHz: 400.0000", "BogoMIPS: 6374.40", "Virtualization: VT-x", "L1d cache: 192 KiB", "L1i cache: 128 KiB", "L2 cache: 5 MiB", "L3 cache: 8 MiB", "NUMA node0 CPU(s): 0-7", "Vulnerability Itlb multihit: Not affected", "Vulnerability L1tf: Not affected", "Vulnerability Mds: Not affected", "Vulnerability Meltdown: Not affected", "Vulnerability Mmio stale data: Not affected", "Vulnerability Retbleed: Not affected", "Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl", "Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization", "Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence", "Vulnerability Srbds: Not affected", "Vulnerability Tsx async abort: Not affected", "Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap 
avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear ibt flush_l1d arch_capabilities" ] }, "Exceptions": [ { "exception": "openvino_fx raised RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': ' File \"/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/models/resnet.py\", line 691, in forward\n hidden_states = self.norm1(hidden_states)\n'}\n\nWhile executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})\nOriginal traceback:\n File \"/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/models/resnet.py\", line 691, in forward\n hidden_states = self.norm1(hidden_states)\n\n\nSet torch._dynamo.config.verbose=True for more information\n\n\nYou can suppress this exception and fall back to eager by setting:\n torch._dynamo.config.suppress_errors = True\n", "traceback": [ [ "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/modules/call_queue.py, line 57, f", "res = list(func(*args, kwargs))" ], [ "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/modules/call_queue.py, line 36, f", "res = func(*args, *kwargs)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/modules/txt2img.py, line 52, txt2img", "processed = modules.scripts.scripts_txt2img.run(p, args)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/modules/scripts.py, line 601, run", "processed = script.run(p, script_args)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/scripts/openvino_accelerate.py, line 1228, run", "processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/scripts/openvino_accelerate.py, line 979, process_images_openvino", "output = shared.sd_diffusers_model(" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/utils/_contextlib.py, line 115, decorate_context", "return func(args, kwargs)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py, line 840, call", "noise_pred = self.unet(" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/module.py, line 1501, _call_impl", "return forward_call(*args, kwargs)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py, line 82, forward", "return self.dynamo_ctx(self._orig_mod.forward)(*args, *kwargs)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py, line 209, _fn", "return fn(args, kwargs)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/models/unet_2d_condition.py, line 932, forward", "emb = 
self.time_embedding(t_emb, timestep_cond)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/models/unet_2d_condition.py, line 1066, ", "sample, res_samples = downsample_block(" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/module.py, line 1501, _call_impl", "return forward_call(*args, kwargs)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/models/unet_2d_blocks.py, line 1159, forward", "hidden_states = resnet(hidden_states, temb, scale=lora_scale)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/module.py, line 1501, _call_impl", "return forward_call(*args, *kwargs)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py, line 337, catch_errors", "return callback(frame, cache_size, hooks)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py, line 404, _convert_frame", "result = inner_convert(frame, cache_size, hooks)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py, line 104, _fn", "return fn(args, kwargs)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py, line 262, _convert_frame_assert", "return _compile(" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/utils.py, line 163, time_wrapper", "r = func(*args, *kwargs)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py, line 324, _compile", "out_code = transform_code_object(code, transform)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py, line 445, transform_code_object", "transformations(instructions, code_options)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py, line 311, transform", "tracer.run()" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py, line 1726, run", "super().run()" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py, line 576, run", "and self.step()" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py, line 540, step", "getattr(self, inst.opname)(inst)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py, line 372, wrapper", "self.output.compile_subgraph(self, reason=reason)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/output_graph.py, line 541, compile_subgraph", "self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/output_graph.py, line 588, compile_and_call_fx_graph", "compiled_fn = self.call_user_compiler(gm)" ], [ "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/utils.py, line 163, time_wrapper", "r = func(args, *kwargs)" ], [ 
"/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/output_graph.py, line 675, call_user_compiler", "raise BackendCompilerFailed(self.compiler_fn, e) from e" ] ] } ], "CPU": { "model": "", "count logical": 8, "count physical": 4 }, "RAM": { "total": "15GB", "used": "7GB", "free": "1GB", "active": "6GB", "inactive": "7GB", "buffers": "46MB", "cached": "7GB", "shared": "423MB" }, "Extensions": [], "Inactive extensions": [], "Environment": { "COMMANDLINE_ARGS": "--skip-torch-cuda-test --precision full --no-half", "GIT": "git", "GRADIO_ANALYTICS_ENABLED": "False" }, "Config": { "samples_save": true, "samples_format": "png", "samples_filename_pattern": "", "save_images_add_number": true, "grid_save": true, "grid_format": "png", "grid_extended_filename": false, "grid_only_if_multiple": true, "grid_prevent_empty_spots": false, "grid_zip_filename_pattern": "", "n_rows": -1, "font": "", "grid_text_active_color": "#000000", "grid_text_inactive_color": "#999999", "grid_background_color": "#ffffff", "enable_pnginfo": true, "save_txt": false, "save_images_before_face_restoration": false, "save_images_before_highres_fix": false, "save_images_before_color_correction": false, "save_mask": false, "save_mask_composite": false, "jpeg_quality": 80, "webp_lossless": false, "export_for_4chan": true, "img_downscale_threshold": 4.0, "target_side_length": 4000, "img_max_size_mp": 200, "use_original_name_batch": true, "use_upscaler_name_as_suffix": false, "save_selected_only": true, "save_init_img": false, "temp_dir": "", "clean_temp_dir_at_start": false, "save_incomplete_images": false, "outdir_samples": "", "outdir_txt2img_samples": "outputs/txt2img-images", "outdir_img2img_samples": "outputs/img2img-images", "outdir_extras_samples": "outputs/extras-images", "outdir_grids": "", "outdir_txt2img_grids": "outputs/txt2img-grids", "outdir_img2img_grids": "outputs/img2img-grids", "outdir_save": "log/images", "outdir_init_images": "outputs/init-images", "save_to_dirs": true, "grid_save_to_dirs": true, "use_save_to_dirs_for_ui": false, "directories_filename_pattern": "[date]", "directories_max_prompt_words": 8, "ESRGAN_tile": 192, "ESRGAN_tile_overlap": 8, "realesrgan_enabled_models": [ "R-ESRGAN 4x+", "R-ESRGAN 4x+ Anime6B" ], "upscaler_for_img2img": null, "face_restoration": false, "face_restoration_model": "CodeFormer", "code_former_weight": 0.5, "face_restoration_unload": false, "auto_launch_browser": "Local", "show_warnings": false, "show_gradio_deprecation_warnings": true, "memmon_poll_rate": 8, "samples_log_stdout": false, "multiple_tqdm": true, "print_hypernet_extra": false, "list_hidden_files": true, "disable_mmap_load_safetensors": false, "hide_ldm_prints": true, "api_enable_requests": true, "api_forbid_local_requests": true, "api_useragent": "", "unload_models_when_training": false, "pin_memory": false, "save_optimizer_state": false, "save_training_settings_to_txt": true, "dataset_filename_word_regex": "", "dataset_filename_join_string": " ", "training_image_repeats_per_epoch": 1, "training_write_csv_every": 500, "training_xattention_optimizations": false, "training_enable_tensorboard": false, "training_tensorboard_save_images": false, "training_tensorboard_flush_every": 120, "sd_model_checkpoint": "v1-5-pruned-emaonly.safetensors [6ce0161689]", "sd_checkpoints_limit": 1, "sd_checkpoints_keep_in_cpu": true, "sd_checkpoint_cache": 0, "sd_unet": "Automatic", "enable_quantization": false, "enable_emphasis": true, "enable_batch_seeds": true, "comma_padding_backtrack": 
20, "CLIP_stop_at_last_layers": 1, "upcast_attn": false, "randn_source": "GPU", "tiling": false, "hires_fix_refiner_pass": "second pass", "sdxl_crop_top": 0, "sdxl_crop_left": 0, "sdxl_refiner_low_aesthetic_score": 2.5, "sdxl_refiner_high_aesthetic_score": 6.0, "sd_vae_explanation": "VAE is a neural network that transforms a standard RGB\nimage into latent space representation and back. Latent space representation is what stable diffusion is working on during sampling\n(i.e. when the progress bar is between empty and full). For txt2img, VAE is used to create a resulting image after the sampling is finished.\nFor img2img, VAE is used to process user's input image before the sampling, and to create an image after sampling.", "sd_vae_checkpoint_cache": 0, "sd_vae": "Automatic", "sd_vae_overrides_per_model_preferences": true, "auto_vae_precision": true, "sd_vae_encode_method": "Full", "sd_vae_decode_method": "Full", "inpainting_mask_weight": 1.0, "initial_noise_multiplier": 1.0, "img2img_extra_noise": 0.0, "img2img_color_correction": false, "img2img_fix_steps": false, "img2img_background_color": "#ffffff", "img2img_editor_height": 720, "img2img_sketch_default_brush_color": "#ffffff", "img2img_inpaint_mask_brush_color": "#ffffff", "img2img_inpaint_sketch_default_brush_color": "#ffffff", "return_mask": false, "return_mask_composite": false, "cross_attention_optimization": "Automatic", "s_min_uncond": 0.0, "token_merging_ratio": 0.0, "token_merging_ratio_img2img": 0.0, "token_merging_ratio_hr": 0.0, "pad_cond_uncond": false, "persistent_cond_cache": true, "batch_cond_uncond": true, "use_old_emphasis_implementation": false, "use_old_karras_scheduler_sigmas": false, "no_dpmpp_sde_batch_determinism": false, "use_old_hires_fix_width_height": false, "dont_fix_second_order_samplers_schedule": false, "hires_fix_use_firstpass_conds": false, "use_old_scheduling": false, "interrogate_keep_models_in_memory": false, "interrogate_return_ranks": false, "interrogate_clip_num_beams": 1, "interrogate_clip_min_length": 24, "interrogate_clip_max_length": 48, "interrogate_clip_dict_limit": 1500, "interrogate_clip_skip_categories": [], "interrogate_deepbooru_score_threshold": 0.5, "deepbooru_sort_alpha": true, "deepbooru_use_spaces": true, "deepbooru_escape": true, "deepbooru_filter_tags": "", "extra_networks_show_hidden_directories": true, "extra_networks_hidden_models": "When searched", "extra_networks_default_multiplier": 1.0, "extra_networks_card_width": 0, "extra_networks_card_height": 0, "extra_networks_card_text_scale": 1.0, "extra_networks_card_show_desc": true, "extra_networks_add_text_separator": " ", "ui_extra_networks_tab_reorder": "", "textual_inversion_print_at_load": false, "textual_inversion_add_hashes_to_infotext": true, "sd_hypernetwork": "None", "localization": "None", "gradio_theme": "Default", "gradio_themes_cache": true, "gallery_height": "", "return_grid": true, "do_not_show_images": false, "send_seed": true, "send_size": true, "js_modal_lightbox": true, "js_modal_lightbox_initially_zoomed": true, "js_modal_lightbox_gamepad": false, "js_modal_lightbox_gamepad_repeat": 250, "show_progress_in_title": true, "samplers_in_dropdown": true, "dimensions_and_batch_together": true, "keyedit_precision_attention": 0.1, "keyedit_precision_extra": 0.05, "keyedit_delimiters": ".,\/!?%^;:{}=`~()", "keyedit_move": true, "quicksettings_list": [ "sd_model_checkpoint" ], "ui_tab_order": [], "hidden_tabs": [], "ui_reorder_list": [], "hires_fix_show_sampler": false, "hires_fix_show_prompts": false, 
"disable_token_counters": false, "add_model_hash_to_info": true, "add_model_name_to_info": true, "add_user_name_to_info": false, "add_version_to_infotext": true, "disable_weights_auto_swap": true, "infotext_styles": "Apply if any", "show_progressbar": true, "live_previews_enable": true, "live_previews_image_format": "png", "show_progress_grid": true, "show_progress_every_n_steps": 10, "show_progress_type": "Approx NN", "live_preview_allow_lowvram_full": false, "live_preview_content": "Prompt", "live_preview_refresh_period": 1000, "live_preview_fast_interrupt": false, "hide_samplers": [], "eta_ddim": 0.0, "eta_ancestral": 1.0, "ddim_discretize": "uniform", "s_churn": 0.0, "s_tmin": 0.0, "s_tmax": 0.0, "s_noise": 1.0, "k_sched_type": "Automatic", "sigma_min": 0.0, "sigma_max": 0.0, "rho": 0.0, "eta_noise_seed_delta": 0, "always_discard_next_to_last_sigma": false, "sgm_noise_multiplier": false, "uni_pc_variant": "bh1", "uni_pc_skip_type": "time_uniform", "uni_pc_order": 3, "uni_pc_lower_order_final": true, "postprocessing_enable_in_main_ui": [], "postprocessing_operation_order": [], "upscaling_max_images_in_cache": 5, "disabled_extensions": [], "disable_all_extensions": "none", "restore_config_state_file": "", "sd_checkpoint_hash": "6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa" }, "Startup": { "total": 10.942436933517456, "records": { "initial startup": 0.0007312297821044922, "prepare environment/checks": 3.4332275390625e-05, "prepare environment/git version info": 0.0332181453704834, "prepare environment/torch GPU test": 0.0033102035522460938, "prepare environment/clone repositores": 0.04341244697570801, "prepare environment/run extensions installers": 0.001992464065551758, "prepare environment": 0.1961209774017334, "launcher": 0.004157304763793945, "import torch": 4.6775078773498535, "import gradio": 0.9120547771453857, "setup paths": 1.0277633666992188, "import ldm": 0.011285781860351562, "import sgm": 6.67572021484375e-06, "initialize shared": 0.12979936599731445, "other imports": 0.7936112880706787, "opts onchange": 0.0003151893615722656, "setup SD model": 0.0001964569091796875, "setup codeformer": 0.12697243690490723, "setup gfpgan": 0.03609609603881836, "set samplers": 3.170967102050781e-05, "list extensions": 6.508827209472656e-05, "restore config state file": 4.5299530029296875e-06, "list SD models": 0.0056018829345703125, "list localizations": 0.0005822181701660156, "load scripts/custom_code.py": 0.0041234493255615234, "load scripts/img2imgalt.py": 0.0012195110321044922, "load scripts/loopback.py": 0.0004875659942626953, "load scripts/openvino_accelerate.py": 1.9242660999298096, "load scripts/outpainting_mk_2.py": 0.0007462501525878906, "load scripts/poor_mans_outpainting.py": 0.001806020736694336, "load scripts/postprocessing_codeformer.py": 0.0009465217590332031, "load scripts/postprocessing_gfpgan.py": 0.00029969215393066406, "load scripts/postprocessing_upscale.py": 0.0006885528564453125, "load scripts/prompt_matrix.py": 0.0003781318664550781, "load scripts/prompts_from_file.py": 0.0003795623779296875, "load scripts/refiner.py": 0.0006880760192871094, "load scripts/sd_upscale.py": 0.00032448768615722656, "load scripts/seed.py": 0.0006184577941894531, "load scripts/xyz_grid.py": 0.0016472339630126953, "load scripts/ldsr_model.py": 0.19625377655029297, "load scripts/lora_script.py": 0.16991043090820312, "load scripts/scunet_model.py": 0.03408503532409668, "load scripts/swinir_model.py": 0.030275344848632812, "load scripts/hotkey_config.py": 
0.0006921291351318359, "load scripts/extra_options_section.py": 0.0007660388946533203, "load scripts": 2.3706605434417725, "load upscalers": 0.01265859603881836, "refresh VAE": 0.0011224746704101562, "refresh textual inversion templates": 0.0005328655242919922, "scripts list_optimizers": 0.0002346038818359375, "scripts list_unets": 9.298324584960938e-06, "reload hypernetworks": 0.00530552864074707, "initialize extra networks": 0.01580810546875, "scripts before_ui_callback": 0.0011720657348632812, "create ui": 0.36687636375427246, "gradio launch": 0.33063817024230957, "add APIs": 0.02945566177368164, "app_started_callback/lora_script.py": 0.0002129077911376953, "app_started_callback": 0.00021505355834960938 } }, "Packages": [ "absl-py==2.0.0", "accelerate==0.21.0", "addict==2.4.0", "aenum==3.1.15", "aiofiles==23.2.1", "aiohttp==3.9.0", "aiosignal==1.3.1", "altair==5.1.2", "antlr4-python3-runtime==4.9.3", "anyio==3.7.1", "async-timeout==4.0.3", "attrs==23.1.0", "basicsr==1.4.2", "beautifulsoup4==4.12.2", "blendmodes==2023", "boltons==23.1.1", "cachetools==5.3.2", "certifi==2023.11.17", "charset-normalizer==3.3.2", "clean-fid==0.1.35", "click==8.1.7", "clip==1.0", "cmake==3.27.7", "contourpy==1.2.0", "cycler==0.12.1", "deprecation==2.1.0", "diffusers==0.23.0", "einops==0.4.1", "exceptiongroup==1.2.0", "facexlib==0.3.0", "fastapi==0.94.0", "ffmpy==0.3.1", "filelock==3.13.1", "filterpy==1.4.5", "fonttools==4.45.1", "frozenlist==1.4.0", "fsspec==2023.10.0", "ftfy==6.1.3", "future==0.18.3", "gdown==4.7.1", "gfpgan==1.3.8", "gitdb==4.0.11", "gitpython==3.1.37", "google-auth-oauthlib==1.1.0", "google-auth==2.23.4", "gradio-client==0.5.0", "gradio==3.41.2", "grpcio==1.59.3", "h11==0.12.0", "httpcore==0.15.0", "httpx==0.24.1", "huggingface-hub==0.19.4", "idna==3.5", "imageio==2.33.0", "importlib-metadata==6.8.0", "importlib-resources==6.1.1", "inflection==0.5.1", "invisible-watermark==0.2.0", "jinja2==3.1.2", "jsonmerge==1.8.0", "jsonschema-specifications==2023.11.1", "jsonschema==4.20.0", "kiwisolver==1.4.5", "kornia==0.6.7", "lark==1.1.2", "lazy-loader==0.3", "lightning-utilities==0.10.0", "lit==17.0.5", "llvmlite==0.41.1", "lmdb==1.4.1", "lpips==0.1.4", "markdown==3.5.1", "markupsafe==2.1.3", "matplotlib==3.8.2", "mpmath==1.3.0", "multidict==6.0.4", "networkx==3.2.1", "numba==0.58.1", "numpy==1.23.5", "oauthlib==3.2.2", "omegaconf==2.2.3", "open-clip-torch==2.20.0", "opencv-python==4.8.1.78", "openvino-telemetry==2023.2.1", "openvino==2023.2.0", "orjson==3.9.10", "packaging==23.2", "pandas==2.1.3", "piexif==1.1.3", "pillow==10.0.1", "pip==20.3.4", "pkg-resources==0.0.0", "platformdirs==4.0.0", "protobuf==3.20.0", "psutil==5.9.5", "pyasn1-modules==0.3.0", "pyasn1==0.5.1", "pydantic==1.10.13", "pydub==0.25.1", "pyparsing==3.1.1", "pysocks==1.7.1", "python-dateutil==2.8.2", "python-multipart==0.0.6", "pytorch-lightning==1.9.4", "pytz==2023.3.post1", "pywavelets==1.5.0", "pyyaml==6.0.1", "realesrgan==0.3.0", "referencing==0.31.0", "regex==2023.10.3", "requests-oauthlib==1.3.1", "requests==2.31.0", "resize-right==0.0.2", "rpds-py==0.13.1", "rsa==4.9", "safetensors==0.3.1", "scikit-image==0.21.0", "scipy==1.11.4", "semantic-version==2.10.0", "sentencepiece==0.1.99", "setuptools==44.1.1", "six==1.16.0", "smmap==5.0.1", "sniffio==1.3.0", "soupsieve==2.5", "starlette==0.26.1", "sympy==1.12", "tb-nightly==2.16.0a20231124", "tensorboard-data-server==0.7.2", "tf-keras-nightly==2.16.0.dev2023112410", "tifffile==2023.9.26", "timm==0.9.2", "tokenizers==0.13.3", "tomesd==0.1.3", "tomli==2.0.1", "toolz==0.12.0", 
"torch==2.0.1+cu118", "torchdiffeq==0.2.3", "torchmetrics==1.2.0", "torchsde==0.2.5", "torchvision==0.15.2+cu118", "tqdm==4.66.1", "trampoline==0.1.2", "transformers==4.30.2", "triton==2.0.0", "typing-extensions==4.8.0", "tzdata==2023.3", "urllib3==2.1.0", "uvicorn==0.24.0.post1", "wcwidth==0.2.12", "websockets==11.0.3", "werkzeug==3.0.1", "yapf==0.40.2", "yarl==1.9.3", "zipp==3.17.0" ] }

What browsers do you use to access the UI?

No response

Console logs

(sd_env) dennis@mx:~/Downloads/stable-diffusion-webui/stable-diffusion-webui$ ./webui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on dennis user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
python venv already activate or run without venv: /home/dennis/Downloads/stable-diffusion-webui/sd_env
################################################################

################################################################
Launching launch.py...
################################################################
Cannot locate TCMalloc (improves CPU memory usage)
fatal: No names found, cannot describe anything.
Python 3.9.2 (default, Feb 28 2021, 17:03:44) 
[GCC 10.2.1 20210110]
Version: 1.6.0
Commit hash: 44006297e03a07f28505d54d6ba5fd55e0c1292d
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
Loading weights [6ce0161689] from /home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Creating model from config: /home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/configs/v1-inference.yaml
Startup time: 10.9s (prepare environment: 0.2s, import torch: 4.7s, import gradio: 0.9s, setup paths: 1.0s, initialize shared: 0.1s, other imports: 0.8s, setup codeformer: 0.1s, load scripts: 2.4s, create ui: 0.4s, gradio launch: 0.3s).
Applying attention optimization: InvokeAI... done.
Model loaded in 6.8s (load weights from disk: 0.6s, create model: 0.5s, apply weights to model: 5.5s).
{}
Loading weights [6ce0161689] from /home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
OpenVINO Script:  created model from config : /home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/configs/v1-inference.yaml
  0%|                                                                                                                                            | 0/20 [00:00<?, ?it/s][2023-11-24 16:51:21,717] torch._dynamo.symbolic_convert: [WARNING] /home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f0e5ab09a60> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-11-24 16:51:21,788] torch._dynamo.symbolic_convert: [WARNING] /home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f0e5ab09a60> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-11-24 16:51:21,819] torch._dynamo.symbolic_convert: [WARNING] /home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f0e5ab09a60> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-11-24 16:51:21,847] torch._dynamo.symbolic_convert: [WARNING] /home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f0e5ab09a60> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-11-24 16:51:21,915] torch._dynamo.symbolic_convert: [WARNING] /home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f0e5ab09a60> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-11-24 16:51:21,956] torch._dynamo.symbolic_convert: [WARNING] /home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f0e5ab09a60> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-11-24 16:51:21,983] torch._dynamo.symbolic_convert: [WARNING] /home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f0e5ab09a60> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-11-24 16:51:22,079] torch._dynamo.symbolic_convert: [WARNING] /home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f0e5ab09a60> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-11-24 16:51:22,106] torch._dynamo.symbolic_convert: [WARNING] /home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/conv.py <function Conv2d.forward at 0x7f0e5ab16280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-11-24 16:51:22,190] torch._dynamo.symbolic_convert: [WARNING] /home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f0e5ab09a60> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-11-24 16:51:22,232] torch._dynamo.symbolic_convert: [WARNING] /home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f0e5ab09a60> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
list index out of range
Traceback (most recent call last):
  File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/scripts/openvino_accelerate.py", line 200, in openvino_fx
    compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs)
  File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/scripts/openvino_accelerate.py", line 426, in openvino_compile_cached_model
    om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/fx/passes/shape_prop.py", line 147, in run_node
    result = super().run_node(n)
  File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/fx/interpreter.py", line 177, in run_node
    return getattr(self, n.op)(n.target, args, kwargs)
  File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/fx/interpreter.py", line 294, in call_module
    return submod(*args, **kwargs)
  File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 459, in network_GroupNorm_forward
    return originals.GroupNorm_forward(self, input)
  File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/normalization.py", line 273, in forward
    return F.group_norm(
  File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/functional.py", line 2526, in group_norm
    return handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps)
  File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/overrides.py", line 1534, in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
  File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_inductor/overrides.py", line 38, in __torch_function__
    return func(*args, **kwargs)
  File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/functional.py", line 2530, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280]
  0%|                                                                                                                                            | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(kdx0qhgm2xob8co)', 'flower with bee on it', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x7f0e20bf3d60>, 1, False, '', 0.8, -1, False, -1, 0, 0, 0, 'None', 'None', 'CPU', True, 'Euler a', True, False, 'None', 0.8, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/scripts/openvino_accelerate.py", line 200, in openvino_fx
        compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs)
      File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/scripts/openvino_accelerate.py", line 426, in openvino_compile_cached_model
        om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])
    IndexError: list index out of range

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/fx/passes/shape_prop.py", line 147, in run_node
        result = super().run_node(n)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/fx/interpreter.py", line 177, in run_node
        return getattr(self, n.op)(n.target, args, kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/fx/interpreter.py", line 294, in call_module
        return submod(*args, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 459, in network_GroupNorm_forward
        return originals.GroupNorm_forward(self, input)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/normalization.py", line 273, in forward
        return F.group_norm(
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/functional.py", line 2526, in group_norm
        return handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/overrides.py", line 1534, in handle_torch_function
        result = mode.__torch_function__(public_api, types, args, kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_inductor/overrides.py", line 38, in __torch_function__
        return func(*args, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/functional.py", line 2530, in group_norm
        return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280]

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 670, in call_user_compiler
        compiled_fn = compiler_fn(gm, self.fake_example_inputs())
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/debug_utils.py", line 1055, in debug_wrapper
        compiled_gm = compiler_fn(gm, example_inputs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/backends/common.py", line 107, in wrapper
        return fn(model, inputs, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/scripts/openvino_accelerate.py", line 233, in openvino_fx
        return compile_fx(subgraph, example_inputs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 415, in compile_fx
        model_ = overrides.fuse_fx(model_, example_inputs_)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_inductor/overrides.py", line 96, in fuse_fx
        gm = mkldnn_fuse_fx(gm, example_inputs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_inductor/mkldnn.py", line 509, in mkldnn_fuse_fx
        ShapeProp(gm, fake_mode=fake_mode).propagate(*example_inputs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/fx/passes/shape_prop.py", line 185, in propagate
        return super().run(*args)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/fx/interpreter.py", line 136, in run
        self.env[node] = self.run_node(node)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/fx/passes/shape_prop.py", line 152, in run_node
        raise RuntimeError(
    RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': '  File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/models/resnet.py", line 691, in forward\n    hidden_states = self.norm1(hidden_states)\n'}

    While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})
    Original traceback:
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/models/resnet.py", line 691, in forward
        hidden_states = self.norm1(hidden_states)

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/modules/txt2img.py", line 52, in txt2img
        processed = modules.scripts.scripts_txt2img.run(p, *args)
      File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/modules/scripts.py", line 601, in run
        processed = script.run(p, *script_args)
      File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/scripts/openvino_accelerate.py", line 1228, in run
        processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)
      File "/home/dennis/Downloads/stable-diffusion-webui/stable-diffusion-webui/scripts/openvino_accelerate.py", line 979, in process_images_openvino
        output = shared.sd_diffusers_model(
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 840, in __call__
        noise_pred = self.unet(
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 82, in forward
        return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
        return fn(*args, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/models/unet_2d_condition.py", line 932, in forward
        emb = self.time_embedding(t_emb, timestep_cond)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/models/unet_2d_condition.py", line 1066, in <graph break in forward>
        sample, res_samples = downsample_block(
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/models/unet_2d_blocks.py", line 1159, in forward
        hidden_states = resnet(hidden_states, temb, scale=lora_scale)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 337, in catch_errors
        return callback(frame, cache_size, hooks)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
        result = inner_convert(frame, cache_size, hooks)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
        return fn(*args, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
        return _compile(
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
        r = func(*args, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
        out_code = transform_code_object(code, transform)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 445, in transform_code_object
        transformations(instructions, code_options)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
        tracer.run()
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1726, in run
        super().run()
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 576, in run
        and self.step()
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 540, in step
        getattr(self, inst.opname)(inst)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 372, in wrapper
        self.output.compile_subgraph(self, reason=reason)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 541, in compile_subgraph
        self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 588, in compile_and_call_fx_graph
        compiled_fn = self.call_user_compiler(gm)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
        r = func(*args, **kwargs)
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 675, in call_user_compiler
        raise BackendCompilerFailed(self.compiler_fn, e) from e
    torch._dynamo.exc.BackendCompilerFailed: openvino_fx raised RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': '  File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/models/resnet.py", line 691, in forward\n    hidden_states = self.norm1(hidden_states)\n'}

    While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})
    Original traceback:
      File "/home/dennis/Downloads/stable-diffusion-webui/sd_env/lib/python3.9/site-packages/diffusers/models/resnet.py", line 691, in forward
        hidden_states = self.norm1(hidden_states)

    Set torch._dynamo.config.verbose=True for more information

    You can suppress this exception and fall back to eager by setting:
        torch._dynamo.config.suppress_errors = True

---

Additional information

Something is incorrect in the install instructions: the program appears to install OK, but it crashes as soon as you try to use it.

hashFactory commented 1 year ago

I ran into the same issue.

I managed to get it working by running these two commands:

export USE_OPENVINO=1
pip install torch==2.1.0 torchvision==0.16.0

I think the installation instructions need to be updated to reflect the latest commit 4400629.
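
As a quick check that the pinned versions and the environment variable took effect (a sketch assuming the sd_env venv layout from the original report, not an official step):

source sd_env/bin/activate
python3 -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"   # expect 2.1.0 and 0.16.0
echo "$USE_OPENVINO"   # should print 1 in the same shell that launches ./webui.sh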

chenxiex commented 11 months ago

I ran into the same issue.

I managed to get it working by running these two commands:

export USE_OPENVINO=1
pip install torch==2.1.0 torchvision==0.16.0

I think the installation instructions need to be updated to reflect the latest commit 4400629

I found this solution works with Python 3.10.6, but Python 3.9 may still throw errors.
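
If the distro's default python3 is still 3.9, one way to pin the venv to a newer interpreter is to create it with python3.10 explicitly (a sketch; it assumes python3.10 is already installed on the system):

python3.10 -m venv sd_env    # create the venv with the 3.10 interpreter
source sd_env/bin/activate
python --version             # should now report Python 3.10.x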

MorrisLu-Taipei commented 10 months ago

Thanks for all of this; it finally works now. An A770 16G in WSL2 with Ubuntu 22.04 works fine, but it is pretty slow: 512*512 at 5 it/s takes 1 min 48 sec. What I installed to fix each problem (see the consolidated sketch below):

sudo apt install libtcmalloc-minimal4          (fixes "Cannot locate TCMalloc (improves CPU memory usage)")
pip install opencv-python-headless             (fixes "ImportError: libGL.so")
export USE_OPENVINO=1
pip install torch==2.1.0 torchvision==0.16.0
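
Pulling the thread's workarounds together, the sequence commenters report working looks roughly like this (a consolidation of the comments above, not an officially verified procedure; the package names and version pins are as reported in this thread):

sudo apt install libtcmalloc-minimal4
python3.10 -m venv sd_env
source sd_env/bin/activate
git clone https://github.com/openvinotoolkit/stable-diffusion-webui.git
cd stable-diffusion-webui
pip install opencv-python-headless
pip install torch==2.1.0 torchvision==0.16.0
export USE_OPENVINO=1
export PYTORCH_TRACING_MODE=TORCHFX
export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half"
./webui.sh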