AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: #14140

Closed: seddouguim closed this issue 1 year ago

seddouguim commented 1 year ago

Is there an existing issue for this?

What happened?

I am running out of VRAM when simply trying to inpaint a small area (batch count: 1, batch size: 1) at 512x512 resolution. I am running AUTOMATIC1111 on an AWS EC2 instance with 16 GB of VRAM, so I don't understand why I run out of memory. It seems like PyTorch is using too much memory, but why?
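For intuition, here is a rough back-of-the-envelope estimate of weight memory alone. The parameter counts below are approximations I am assuming for an SDXL-class checkpoint like juggernautXL, not figures from this report, but they illustrate why `--no-half` matters on a 16 GB T4:

```python
# Approximate VRAM taken by model weights alone, before activations,
# the VAE decode, or any ControlNet/ADetailer models are counted.
# Parameter counts are rough assumptions for an SDXL-class model.
UNET_PARAMS = 2.6e9       # SDXL UNet, approx.
TEXT_ENC_PARAMS = 0.8e9   # both SDXL text encoders, approx.

def weights_gib(params: float, bytes_per_param: int) -> float:
    """Convert a parameter count to GiB at a given precision."""
    return params * bytes_per_param / 1024**3

total = UNET_PARAMS + TEXT_ENC_PARAMS
print(f"fp32 (--no-half): {weights_gib(total, 4):.1f} GiB")
print(f"fp16 (default):   {weights_gib(total, 2):.1f} GiB")
```

At fp32 the weights alone approach the T4's 14.75 GiB capacity, which lines up with the "14.22 GiB already allocated" figure in the traceback below.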

Steps to reproduce the problem

  1. Go to img2img
  2. Switch to the Inpaint sketch tab
  3. Select the area to inpaint
  4. Press Generate

What should have happened?

I don't think I should run out of memory for such a trivial task.

Sysinfo

{ "Platform": "Linux-5.15.0-1049-aws-x86_64-with-glibc2.31", "Python": "3.10.9", "Version": "v1.6.0-2-g4afaaf8a", "Commit": "4afaaf8a020c1df457bcf7250cb1c7f609699fa7", "Script path": "/home/ubuntu/stable-diffusion-webui", "Data path": "/home/ubuntu/stable-diffusion-webui", "Extensions dir": "/home/ubuntu/stable-diffusion-webui/extensions", "Checksum": "55433c949b419a126792b8bcf9c87ed875386fb27cb374d8e6d0e6c00ee5c4c5", "Commandline": [ "launch.py", "--share", "--enable-insecure-extension-access", "--no-half" ], "Torch env info": { "torch_version": "2.0.1+cu118", "is_debug_build": "False", "cuda_compiled_version": "11.8", "gcc_version": "(Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0", "clang_version": null, "cmake_version": "version 3.27.7", "os": "Ubuntu 20.04.6 LTS (x86_64)", "libc_version": "glibc-2.31", "python_version": "3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:20:04) [GCC 11.3.0] (64-bit runtime)", "python_platform": "Linux-5.15.0-1049-aws-x86_64-with-glibc2.31", "is_cuda_available": "True", "cuda_runtime_version": "12.1.105", "cuda_module_loading": "LAZY", "nvidia_driver_version": "535.104.12", "nvidia_gpu_models": "GPU 0: Tesla T4", "cudnn_version": null, "pip_version": "pip3", "pip_packages": [ "numpy==1.23.5", "open-clip-torch==2.20.0", "pytorch-lightning==1.9.4", "torch==2.0.1+cu118", "torchdiffeq==0.2.3", "torchmetrics==1.2.0", "torchsde==0.2.5", "torchvision==0.15.2+cu118" ], "conda_packages": "", "hip_compiled_version": "N/A", "hip_runtime_version": "N/A", "miopen_runtime_version": "N/A", "caching_allocator_config": "", "is_xnnpack_available": "True", "cpu_info": [ "Architecture: x86_64", "CPU op-mode(s): 32-bit, 64-bit", "Byte Order: Little Endian", "Address sizes: 46 bits physical, 48 bits virtual", "CPU(s): 4", "On-line CPU(s) list: 0-3", "Thread(s) per core: 2", "Core(s) per socket: 2", "Socket(s): 1", "NUMA node(s): 1", "Vendor ID: GenuineIntel", "CPU family: 6", "Model: 85", "Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz", 
"Stepping: 7", "CPU MHz: 2499.998", "BogoMIPS: 4999.99", "Hypervisor vendor: KVM", "Virtualization type: full", "L1d cache: 64 KiB", "L1i cache: 64 KiB", "L2 cache: 2 MiB", "L3 cache: 35.8 MiB", "NUMA node0 CPU(s): 0-3", "Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status", "Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported", "Vulnerability L1tf: Mitigation; PTE Inversion", "Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown", "Vulnerability Meltdown: Mitigation; PTI", "Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown", "Vulnerability Retbleed: Vulnerable", "Vulnerability Spec rstack overflow: Not affected", "Vulnerability Spec store bypass: Vulnerable", "Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and user pointer sanitization", "Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected", "Vulnerability Srbds: Not affected", "Vulnerability Tsx async abort: Not affected", "Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni" ] }, "Exceptions": [ { "exception": "CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 14.75 GiB total capacity; 14.22 GiB already allocated; 63.06 MiB free; 14.54 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. 
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF", "traceback": [ [ "/home/ubuntu/stable-diffusion-webui/modules/call_queue.py, line 57, f", "res = list(func(*args, **kwargs))" ], [ "/home/ubuntu/stable-diffusion-webui/modules/call_queue.py, line 36, f", "res = func(*args, **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/modules/img2img.py, line 208, img2img", "processed = process_images(p)" ], [ "/home/ubuntu/stable-diffusion-webui/modules/processing.py, line 732, process_images", "res = process_images_inner(p)" ], [ "/home/ubuntu/stable-diffusion-webui/extensions/controlnet/scripts/batch_hijack.py, line 42, processing_process_images_hijack", "return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/modules/processing.py, line 867, process_images_inner", "samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)" ], [ "/home/ubuntu/stable-diffusion-webui/modules/processing.py, line 1528, sample", "samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)" ], [ "/home/ubuntu/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py, line 188, sample_img2img", "samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))" ], [ "/home/ubuntu/stable-diffusion-webui/modules/sd_samplers_common.py, line 261, launch_sampling", "return func()" ], [ "/home/ubuntu/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py, line 188, <lambda>", "samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))" ], [ 
"/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py, line 115, decorate_context", "return func(*args, **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py, line 594, sample_dpmpp_2m", "denoised = model(x, sigmas[i] * s_in, **extra_args)" ], [ "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1501, _call_impl", "return forward_call(*args, **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py, line 169, forward", "x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))" ], [ "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1501, _call_impl", "return forward_call(*args, **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py, line 112, forward", "eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py, line 138, get_eps", "return self.inner_model.apply_model(*args, **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/modules/sd_models_xl.py, line 37, apply_model", "return self.model(x, t, cond)" ], [ "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1501, _call_impl", "return forward_call(*args, **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/modules/sd_hijack_utils.py, line 17, <lambda>", "setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))" ], [ "/home/ubuntu/stable-diffusion-webui/modules/sd_hijack_utils.py, line 28, __call__", "return self.orig_func(*args, **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/wrappers.py, line 28, forward", "return self.diffusion_model(" ], [ 
"/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1501, _call_impl", "return forward_call(*args, **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/openaimodel.py, line 993, forward", "h = module(h, emb, context)" ], [ "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1501, _call_impl", "return forward_call(*args, **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/openaimodel.py, line 100, forward", "x = layer(x, context)" ], [ "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1501, _call_impl", "return forward_call(*args, **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/attention.py, line 627, forward", "x = block(x, context=context[i])" ], [ "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1501, _call_impl", "return forward_call(*args, **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/attention.py, line 459, forward", "return checkpoint(" ], [ "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/util.py, line 165, checkpoint", "return CheckpointFunction.apply(func, len(inputs), *args)" ], [ "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/autograd/function.py, line 506, apply", "return super().apply(*args, **kwargs) # type: ignore[misc]" ], [ "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/util.py, line 182, forward", "output_tensors = ctx.run_function(*ctx.input_tensors)" ], [ "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/attention.py, line 467, _forward", "self.attn1(" ], [ 
"/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1501, _call_impl", "return forward_call(*args, **kwargs)" ], [ "/home/ubuntu/stable-diffusion-webui/modules/sd_hijack_optimizations.py, line 266, split_cross_attention_forward", "s1 = einsum('b i d, b j d -> b i j', q[:, i:end], k)" ], [ "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/functional.py, line 378, einsum", "return _VF.einsum(equation, operands) # type: ignore[attr-defined]" ] ] } ], "CPU": { "model": "x86_64", "count logical": 4, "count physical": 2 }, "RAM": { "total": "15GB", "used": "14GB", "free": "216MB", "active": "332MB", "inactive": "14GB", "buffers": "42MB", "cached": "1GB", "shared": "15MB" }, "Extensions": [ { "name": "adetailer", "path": "/home/ubuntu/stable-diffusion-webui/extensions/adetailer", "version": "6b41b3db", "branch": "main", "remote": "https://github.com/Bing-su/adetailer.git" }, { "name": "controlnet", "path": "/home/ubuntu/stable-diffusion-webui/extensions/controlnet", "version": "c1f3d6f8", "branch": "main", "remote": "https://github.com/Mikubill/sd-webui-controlnet" }, { "name": "ultimate-upscale-for-automatic1111", "path": "/home/ubuntu/stable-diffusion-webui/extensions/ultimate-upscale-for-automatic1111", "version": "728ffcec", "branch": "master", "remote": "https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git" } ], "Inactive extensions": [ { "name": "canvas-zoom", "path": "/home/ubuntu/stable-diffusion-webui/extensions/canvas-zoom", "version": "36762c01", "branch": "main", "remote": "https://github.com/richrobber2/canvas-zoom.git" }, { "name": "infinite-zoom-automatic1111-webui", "path": "/home/ubuntu/stable-diffusion-webui/extensions/infinite-zoom-automatic1111-webui", "version": "d6461e7d", "branch": "main", "remote": "https://github.com/v8hid/infinite-zoom-automatic1111-webui.git" } ], "Environment": { "COMMANDLINE_ARGS": "--share --enable-insecure-extension-access --no-half", 
"GIT": "git", "GRADIO_ANALYTICS_ENABLED": "False" }, "Config": { "samples_save": true, "samples_format": "png", "samples_filename_pattern": "", "save_images_add_number": true, "grid_save": true, "grid_format": "png", "grid_extended_filename": false, "grid_only_if_multiple": true, "grid_prevent_empty_spots": false, "grid_zip_filename_pattern": "", "n_rows": -1, "font": "", "grid_text_active_color": "#000000", "grid_text_inactive_color": "#999999", "grid_background_color": "#ffffff", "enable_pnginfo": true, "save_txt": false, "save_images_before_face_restoration": false, "save_images_before_highres_fix": false, "save_images_before_color_correction": false, "save_mask": false, "save_mask_composite": false, "jpeg_quality": 80, "webp_lossless": false, "export_for_4chan": true, "img_downscale_threshold": 4.0, "target_side_length": 4000, "img_max_size_mp": 200, "use_original_name_batch": true, "use_upscaler_name_as_suffix": false, "save_selected_only": true, "save_init_img": false, "temp_dir": "", "clean_temp_dir_at_start": false, "save_incomplete_images": false, "outdir_samples": "", "outdir_txt2img_samples": "outputs/txt2img-images", "outdir_img2img_samples": "outputs/img2img-images", "outdir_extras_samples": "outputs/extras-images", "outdir_grids": "", "outdir_txt2img_grids": "outputs/txt2img-grids", "outdir_img2img_grids": "outputs/img2img-grids", "outdir_save": "log/images", "outdir_init_images": "outputs/init-images", "save_to_dirs": true, "grid_save_to_dirs": true, "use_save_to_dirs_for_ui": false, "directories_filename_pattern": "[date]", "directories_max_prompt_words": 8, "ESRGAN_tile": 192, "ESRGAN_tile_overlap": 8, "realesrgan_enabled_models": [ "R-ESRGAN 4x+", "R-ESRGAN 4x+ Anime6B" ], "upscaler_for_img2img": null, "face_restoration": false, "face_restoration_model": "CodeFormer", "code_former_weight": 0.5, "face_restoration_unload": false, "auto_launch_browser": "Local", "show_warnings": false, "show_gradio_deprecation_warnings": true, "memmon_poll_rate": 8, 
"samples_log_stdout": false, "multiple_tqdm": true, "print_hypernet_extra": false, "list_hidden_files": true, "disable_mmap_load_safetensors": false, "hide_ldm_prints": true, "api_enable_requests": true, "api_forbid_local_requests": true, "api_useragent": "", "unload_models_when_training": false, "pin_memory": false, "save_optimizer_state": false, "save_training_settings_to_txt": true, "dataset_filename_word_regex": "", "dataset_filename_join_string": " ", "training_image_repeats_per_epoch": 1, "training_write_csv_every": 500, "training_xattention_optimizations": false, "training_enable_tensorboard": false, "training_tensorboard_save_images": false, "training_tensorboard_flush_every": 120, "sd_model_checkpoint": "juggernautXL_version6Rundiffusion.safetensors [1fe6c7ec54]", "sd_checkpoints_limit": 1, "sd_checkpoints_keep_in_cpu": true, "sd_checkpoint_cache": 0, "sd_unet": "Automatic", "enable_quantization": false, "enable_emphasis": true, "enable_batch_seeds": true, "comma_padding_backtrack": 20, "CLIP_stop_at_last_layers": 1, "upcast_attn": false, "randn_source": "GPU", "tiling": false, "hires_fix_refiner_pass": "second pass", "sdxl_crop_top": 0, "sdxl_crop_left": 0, "sdxl_refiner_low_aesthetic_score": 2.5, "sdxl_refiner_high_aesthetic_score": 6.0, "sd_vae_explanation": "VAE is a neural network that transforms a standard RGB\nimage into latent space representation and back. Latent space representation is what stable diffusion is working on during sampling\n(i.e. when the progress bar is between empty and full). 
For txt2img, VAE is used to create a resulting image after the sampling is finished.\nFor img2img, VAE is used to process user's input image before the sampling, and to create an image after sampling.", "sd_vae_checkpoint_cache": 0, "sd_vae": "Automatic", "sd_vae_overrides_per_model_preferences": true, "auto_vae_precision": true, "sd_vae_encode_method": "Full", "sd_vae_decode_method": "Full", "inpainting_mask_weight": 1.0, "initial_noise_multiplier": 1.0, "img2img_extra_noise": 0.0, "img2img_color_correction": false, "img2img_fix_steps": false, "img2img_background_color": "#ffffff", "img2img_editor_height": 720, "img2img_sketch_default_brush_color": "#ffffff", "img2img_inpaint_mask_brush_color": "#ffffff", "img2img_inpaint_sketch_default_brush_color": "#ffffff", "return_mask": false, "return_mask_composite": false, "cross_attention_optimization": "Automatic", "s_min_uncond": 0.0, "token_merging_ratio": 0.0, "token_merging_ratio_img2img": 0.0, "token_merging_ratio_hr": 0.0, "pad_cond_uncond": false, "persistent_cond_cache": true, "batch_cond_uncond": true, "use_old_emphasis_implementation": false, "use_old_karras_scheduler_sigmas": false, "no_dpmpp_sde_batch_determinism": false, "use_old_hires_fix_width_height": false, "dont_fix_second_order_samplers_schedule": false, "hires_fix_use_firstpass_conds": false, "use_old_scheduling": false, "interrogate_keep_models_in_memory": false, "interrogate_return_ranks": false, "interrogate_clip_num_beams": 1, "interrogate_clip_min_length": 24, "interrogate_clip_max_length": 48, "interrogate_clip_dict_limit": 1500, "interrogate_clip_skip_categories": [], "interrogate_deepbooru_score_threshold": 0.5, "deepbooru_sort_alpha": true, "deepbooru_use_spaces": true, "deepbooru_escape": true, "deepbooru_filter_tags": "", "extra_networks_show_hidden_directories": true, "extra_networks_hidden_models": "When searched", "extra_networks_default_multiplier": 1.0, "extra_networks_card_width": 0, "extra_networks_card_height": 0, 
"extra_networks_card_text_scale": 1.0, "extra_networks_card_show_desc": true, "extra_networks_add_text_separator": " ", "ui_extra_networks_tab_reorder": "", "textual_inversion_print_at_load": false, "textual_inversion_add_hashes_to_infotext": true, "sd_hypernetwork": "None", "localization": "None", "gradio_theme": "Default", "gradio_themes_cache": true, "gallery_height": "", "return_grid": true, "do_not_show_images": false, "send_seed": true, "send_size": true, "js_modal_lightbox": true, "js_modal_lightbox_initially_zoomed": true, "js_modal_lightbox_gamepad": false, "js_modal_lightbox_gamepad_repeat": 250, "show_progress_in_title": true, "samplers_in_dropdown": true, "dimensions_and_batch_together": true, "keyedit_precision_attention": 0.1, "keyedit_precision_extra": 0.05, "keyedit_delimiters": ".,\/!?%^*;:{}=`~()", "keyedit_move": true, "quicksettings_list": [ "sd_model_checkpoint" ], "ui_tab_order": [], "hidden_tabs": [], "ui_reorder_list": [], "hires_fix_show_sampler": false, "hires_fix_show_prompts": false, "disable_token_counters": false, "add_model_hash_to_info": true, "add_model_name_to_info": true, "add_user_name_to_info": false, "add_version_to_infotext": true, "disable_weights_auto_swap": true, "infotext_styles": "Apply if any", "show_progressbar": true, "live_previews_enable": true, "live_previews_image_format": "png", "show_progress_grid": true, "show_progress_every_n_steps": 10, "show_progress_type": "Approx NN", "live_preview_allow_lowvram_full": false, "live_preview_content": "Prompt", "live_preview_refresh_period": 1000, "live_preview_fast_interrupt": false, "hide_samplers": [], "eta_ddim": 0.0, "eta_ancestral": 1.0, "ddim_discretize": "uniform", "s_churn": 0.0, "s_tmin": 0.0, "s_tmax": 0.0, "s_noise": 1.0, "k_sched_type": "Automatic", "sigma_min": 0.0, "sigma_max": 0.0, "rho": 0.0, "eta_noise_seed_delta": 0, "always_discard_next_to_last_sigma": false, "sgm_noise_multiplier": false, "uni_pc_variant": "bh1", "uni_pc_skip_type": "time_uniform", 
"uni_pc_order": 3, "uni_pc_lower_order_final": true, "postprocessing_enable_in_main_ui": [], "postprocessing_operation_order": [], "upscaling_max_images_in_cache": 5, "disabled_extensions": [ "canvas-zoom", "infinite-zoom-automatic1111-webui" ], "disable_all_extensions": "none", "restore_config_state_file": "", "sd_checkpoint_hash": "1fe6c7ec54c786040cdabc7b4e89720069d97096922e20d01f13e7764412b47f", "ldsr_steps": 100, "ldsr_cached": false, "SCUNET_tile": 256, "SCUNET_tile_overlap": 8, "SWIN_tile": 192, "SWIN_tile_overlap": 8, "SWIN_torch_compile": false, "lora_functional": false, "sd_lora": "None", "lora_preferred_name": "Alias from file", "lora_add_hashes_to_infotext": true, "lora_show_all": false, "lora_hide_unknown_for_versions": [], "lora_in_memory_limit": 0, "extra_options_txt2img": [], "extra_options_img2img": [], "extra_options_cols": 1, "extra_options_accordion": false, "canvas_hotkey_zoom": "Alt", "canvas_hotkey_adjust": "Ctrl", "canvas_hotkey_move": "F", "canvas_hotkey_fullscreen": "S", "canvas_hotkey_reset": "R", "canvas_hotkey_overlap": "O", "canvas_show_tooltip": true, "canvas_auto_expand": true, "canvas_blur_prompt": false, "canvas_disabled_functions": [ "Overlap" ], "canvas_zoom_undo_extra_key": "Ctrl", "canvas_zoom_hotkey_undo": "Z", "canvas_zoom_inc_brush_size": "]", "canvas_zoom_dec_brush_size": "[", "canvas_zoom_hotkey_open_colorpanel": "Q", "canvas_zoom_hotkey_pin_colorpanel": "T", "canvas_zoom_hotkey_dropper": "A", "canvas_zoom_hotkey_fill": "X", "canvas_zoom_hotkey_transparency": "C", "canvas_zoom_hide_btn": true, "canvas_zoom_mask_clear": true, "canvas_zoom_enable_integration": true, "canvas_zoom_brush_size": 200, "canvas_zoom_transparency_level": 70, "canvas_zoom_brush_opacity": false, "canvas_zoom_inpaint_label": true, "canvas_zoom_inpaint_warning": true, "canvas_zoom_inpaint_change_btn_color": false, "canvas_zoom_inpaint_btn_color": "#C33227", "canvas_zoom_brush_outline": false, "canvas_zoom_add_buttons": false, 
"canvas_zoom_draw_staight_lines": false, "canvas_zoom_inpaint_brushcolor": "#000000", "canvas_zoom_disabled_functions": [ "Overlap" ], "ad_max_models": 2, "ad_extra_models_dir": "", "ad_save_previews": false, "ad_save_images_before": false, "ad_only_seleted_scripts": true, "ad_script_names": "dynamic_prompting,dynamic_thresholding,wildcard_recursive,wildcards,lora_block_weight", "ad_bbox_sortby": "None", "ad_same_seed_for_each_tap": false, "control_net_detectedmap_dir": "detected_maps", "control_net_models_path": "", "control_net_modules_path": "", "control_net_unit_count": 3, "control_net_model_cache_size": 1, "control_net_inpaint_blur_sigma": 7, "control_net_no_high_res_fix": false, "control_net_no_detectmap": false, "control_net_detectmap_autosaving": false, "control_net_allow_script_control": false, "control_net_sync_field_args": true, "controlnet_show_batch_images_in_ui": false, "controlnet_increment_seed_during_batch": false, "controlnet_disable_control_type": false, "controlnet_disable_openpose_edit": false, "controlnet_ignore_noninpaint_mask": false, "infzoom_outpath": "outputs", "infzoom_outSUBpath": "infinite-zooms", "infzoom_outsizeW": 512, "infzoom_outsizeH": 512, "infzoom_ffprobepath": "", "infzoom_defPrompt": "{\n\t\"prePrompt\": \"Huge spectacular Waterfall in \",\n\t\"prompts\": {\n\t\t\"data\": [\n\t\t\t[0, \"a dense tropical forest\"],\n\t\t\t[2, \"a Lush jungle\"],\n\t\t\t[3, \"a Thick rainforest\"],\n\t\t\t[5, \"a Verdant canopy\"]\n\t\t]\n\t},\n\t\"postPrompt\": \"epic perspective,(vegetation overgrowth:1.3)(intricate, ornamentation:1.1),(baroque:1.1), fantasy, (realistic:1) digital painting , (magical,mystical:1.2) , (wide angle shot:1.4), (landscape composed:1.2)(medieval:1.1),(tropical forest:1.4),(river:1.3) volumetric lighting ,epic, style by Alex Horley Wenjun Lin greg rutkowski Ruan Jia (Wayne Barlowe:1.2)\",\n\t\"negPrompt\": \"frames, border, edges, borderline, text, character, duplicate, error, out of frame, watermark, low quality, 
ugly, deformed, blur, bad-artist\"\n}", "infzoom_collectAllResources": false }, "Startup": { "total": 34.07622504234314, "records": { "initial startup": 0.001544952392578125, "prepare environment/checks": 6.222724914550781e-05, "prepare environment/git version info": 0.02249598503112793, "prepare environment/torch GPU test": 5.922531366348267, "prepare environment/clone repositores": 0.04510498046875, "prepare environment/run extensions installers/ultimate-upscale-for-automatic1111": 0.0014960765838623047, "prepare environment/run extensions installers/adetailer": 0.10173654556274414, "prepare environment/run extensions installers/controlnet": 0.3862791061401367, "prepare environment/run extensions installers": 0.4895336627960205, "prepare environment": 6.597450017929077, "launcher": 0.0027039051055908203, "import torch": 5.278466463088989, "import gradio": 1.6153597831726074, "setup paths": 1.7229018211364746, "import ldm": 0.011638164520263672, "import sgm": 5.9604644775390625e-06, "initialize shared": 0.30699753761291504, "other imports": 1.5112371444702148, "opts onchange": 0.0004150867462158203, "setup SD model": 0.00037598609924316406, "setup codeformer": 0.25573277473449707, "setup gfpgan": 0.053580284118652344, "set samplers": 5.745887756347656e-05, "list extensions": 0.00016307830810546875, "restore config state file": 8.106231689453125e-06, "list SD models": 0.02207493782043457, "list localizations": 0.0001595020294189453, "load scripts/custom_code.py": 0.004318952560424805, "load scripts/img2imgalt.py": 0.009326457977294922, "load scripts/loopback.py": 0.0007936954498291016, "load scripts/outpainting_mk_2.py": 0.0007576942443847656, "load scripts/poor_mans_outpainting.py": 0.0008897781372070312, "load scripts/postprocessing_codeformer.py": 0.0007886886596679688, "load scripts/postprocessing_gfpgan.py": 0.00041556358337402344, "load scripts/postprocessing_upscale.py": 0.0007939338684082031, "load scripts/prompt_matrix.py": 0.0007572174072265625, "load 
scripts/prompts_from_file.py": 0.0009541511535644531, "load scripts/refiner.py": 0.0014998912811279297, "load scripts/sd_upscale.py": 0.0006248950958251953, "load scripts/seed.py": 0.0020329952239990234, "load scripts/xyz_grid.py": 0.003252744674682617, "load scripts/!adetailer.py": 0.5238430500030518, "load scripts/adapter.py": 0.0015759468078613281, "load scripts/api.py": 0.48351216316223145, "load scripts/batch_hijack.py": 0.0010502338409423828, "load scripts/cldm.py": 0.0010056495666503906, "load scripts/controlmodel_ipadapter.py": 0.000957489013671875, "load scripts/controlnet.py": 0.1613450050354004, "load scripts/controlnet_diffusers.py": 0.00029850006103515625, "load scripts/controlnet_lllite.py": 0.00023412704467773438, "load scripts/controlnet_lora.py": 0.00020813941955566406, "load scripts/controlnet_model_guess.py": 0.000217437744140625, "load scripts/controlnet_version.py": 0.00020170211791992188, "load scripts/enums.py": 0.0009853839874267578, "load scripts/external_code.py": 0.00012874603271484375, "load scripts/global_state.py": 0.0003674030303955078, "load scripts/hook.py": 0.0006949901580810547, "load scripts/infotext.py": 0.00018358230590820312, "load scripts/logging.py": 0.0002875328063964844, "load scripts/lvminthin.py": 0.00033974647521972656, "load scripts/movie2movie.py": 0.0006403923034667969, "load scripts/processor.py": 0.0003600120544433594, "load scripts/utils.py": 0.00033664703369140625, "load scripts/xyz_grid_support.py": 0.0009984970092773438, "load scripts/ultimate-upscale.py": 0.001844167709350586, "load scripts/ldsr_model.py": 0.02965855598449707, "load scripts/lora_script.py": 0.16206073760986328, "load scripts/scunet_model.py": 0.03057408332824707, "load scripts/swinir_model.py": 0.03601717948913574, "load scripts/hotkey_config.py": 0.0014574527740478516, "load scripts/extra_options_section.py": 0.0014760494232177734, "load scripts": 1.4701149463653564, "load upscalers": 0.004113197326660156, "refresh VAE": 
0.0009195804595947266, "refresh textual inversion templates": 4.220008850097656e-05, "scripts list_optimizers": 0.000286102294921875, "scripts list_unets": 7.152557373046875e-06, "reload hypernetworks": 0.019298076629638672, "initialize extra networks": 0.004291057586669922, "scripts before_ui_callback": 0.00014853477478027344, "create ui": 0.6981532573699951, "gradio launch": 14.60429334640503, "add APIs": 0.00873565673828125, "app_started_callback/api.py": 0.002362966537475586, "app_started_callback/lora_script.py": 0.0003113746643066406, "app_started_callback": 0.002681255340576172 } }, "Packages": [ "absl-py==2.0.0", "accelerate==0.21.0", "addict==2.4.0", "aenum==3.1.15", "aiofiles==23.2.1", "aiohttp==3.8.6", "aiosignal==1.3.1", "altair==5.1.2", "antlr4-python3-runtime==4.9.3", "anyio==3.7.1", "async-timeout==4.0.3", "attrs==23.1.0", "basicsr==1.4.2", "beautifulsoup4==4.12.2", "blendmodes==2022", "boltons==23.1.1", "cachetools==5.3.2", "certifi==2023.7.22", "cffi==1.16.0", "charset-normalizer==3.3.2", "clean-fid==0.1.35", "click==8.1.7", "clip==1.0", "cmake==3.27.7", "contourpy==1.2.0", "cssselect2==0.7.0", "cycler==0.12.1", "deprecation==2.1.0", "einops==0.4.1", "exceptiongroup==1.1.3", "facexlib==0.3.0", "fastapi==0.94.0", "ffmpy==0.3.1", "filelock==3.13.1", "filterpy==1.4.5", "flatbuffers==23.5.26", "fonttools==4.44.0", "frozenlist==1.4.0", "fsspec==2023.10.0", "ftfy==6.1.1", "future==0.18.3", "fvcore==0.1.5.post20221221", "gdown==4.7.1", "gfpgan==1.3.8", "gitdb==4.0.11", "gitpython==3.1.32", "google-auth-oauthlib==1.1.0", "google-auth==2.23.4", "gradio-client==0.5.0", "gradio==3.41.2", "grpcio==1.59.2", "h11==0.12.0", "httpcore==0.15.0", "httpx==0.24.1", "huggingface-hub==0.19.0", "idna==3.4", "imageio-ffmpeg==0.4.9", "imageio==2.32.0", "importlib-metadata==6.8.0", "importlib-resources==6.1.1", "inflection==0.5.1", "iopath==0.1.9", "jinja2==3.1.2", "jsonmerge==1.8.0", "jsonschema-specifications==2023.7.1", "jsonschema==4.19.2", "kiwisolver==1.4.5", 
"kornia==0.6.7", "lark==1.1.2", "lazy-loader==0.3", "lightning-utilities==0.9.0", "lit==17.0.4", "llvmlite==0.41.1", "lmdb==1.4.1", "lpips==0.1.4", "lxml==4.9.3", "markdown-it-py==3.0.0", "markdown==3.5.1", "markupsafe==2.1.3", "matplotlib==3.8.1", "mdurl==0.1.2", "mediapipe==0.10.8", "mpmath==1.3.0", "multidict==6.0.4", "networkx==3.2.1", "numba==0.58.1", "numpy==1.23.5", "oauthlib==3.2.2", "omegaconf==2.2.3", "open-clip-torch==2.20.0", "opencv-contrib-python==4.8.1.78", "opencv-python==4.8.1.78", "orjson==3.9.10", "packaging==23.2", "pandas==2.1.3", "piexif==1.1.3", "pillow==9.5.0", "pip==22.3.1", "platformdirs==4.0.0", "portalocker==2.8.2", "protobuf==3.20.0", "psutil==5.9.5", "py-cpuinfo==9.0.0", "pyasn1-modules==0.3.0", "pyasn1==0.5.0", "pycparser==2.21", "pydantic==1.10.13", "pydub==0.25.1", "pygments==2.16.1", "pyparsing==3.1.1", "pysocks==1.7.1", "python-dateutil==2.8.2", "python-multipart==0.0.6", "pytorch-lightning==1.9.4", "pytz==2023.3.post1", "pywavelets==1.4.1", "pyyaml==6.0.1", "realesrgan==0.3.0", "referencing==0.30.2", "regex==2023.10.3", "reportlab==4.0.7", "requests-oauthlib==1.3.1", "requests==2.31.0", "resize-right==0.0.2", "rich==13.6.0", "rpds-py==0.12.0", "rsa==4.9", "safetensors==0.3.1", "scikit-image==0.21.0", "scipy==1.11.3", "seaborn==0.13.0", "semantic-version==2.10.0", "sentencepiece==0.1.99", "setuptools==65.5.0", "six==1.16.0", "smmap==5.0.1", "sniffio==1.3.0", "sounddevice==0.4.6", "soupsieve==2.5", "starlette==0.26.1", "svglib==1.5.1", "sympy==1.12", "tabulate==0.9.0", "tb-nightly==2.16.0a20231112", "tensorboard-data-server==0.7.2", "termcolor==2.3.0", "thop==0.1.1.post2209072238", "tifffile==2023.9.26", "timm==0.9.2", "tinycss2==1.2.1", "tokenizers==0.13.3", "tomesd==0.1.3", "tomli==2.0.1", "toolz==0.12.0", "torch==2.0.1+cu118", "torchdiffeq==0.2.3", "torchmetrics==1.2.0", "torchsde==0.2.5", "torchvision==0.15.2+cu118", "tqdm==4.66.1", "trampoline==0.1.2", "transformers==4.30.2", "triton==2.0.0", "typing-extensions==4.8.0", 
"tzdata==2023.3", "ultralytics==8.0.208", "urllib3==2.0.7", "uvicorn==0.24.0.post1", "wcwidth==0.2.9", "webencodings==0.5.1", "websockets==11.0.3", "werkzeug==3.0.1", "yacs==0.1.8", "yapf==0.40.2", "yarl==1.9.2", "zipp==3.17.0" ] }

What browsers do you use to access the UI?

Google Chrome

Console logs

./webui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on ubuntu user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Cannot locate TCMalloc (improves CPU memory usage)
Python 3.10.9 | packaged by conda-forge | (main, Feb  2 2023, 20:20:04) [GCC 11.3.0]
Version: v1.6.0-2-g4afaaf8a
Commit hash: 4afaaf8a020c1df457bcf7250cb1c7f609699fa7
Launching Web UI with arguments: --share --enable-insecure-extension-access --no-half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
[-] ADetailer initialized. version: 23.11.0, num models: 9
2023-11-29 02:30:26,797 - ControlNet - INFO - ControlNet v1.1.417
ControlNet preprocessor location: /home/ubuntu/stable-diffusion-webui/extensions/controlnet/annotator/downloads
2023-11-29 02:30:26,955 - ControlNet - INFO - ControlNet v1.1.417
Loading weights [1fe6c7ec54] from /home/ubuntu/stable-diffusion-webui/models/Stable-diffusion/juggernautXL_version6Rundiffusion.safetensors
Running on local URL:  http://127.0.0.1:7860
Creating model from config: /home/ubuntu/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
Running on public URL: https://2f7bac91d1214523be.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Startup time: 34.1s (prepare environment: 6.6s, import torch: 5.3s, import gradio: 1.6s, setup paths: 1.7s, initialize shared: 0.3s, other imports: 1.5s, setup codeformer: 0.3s, load scripts: 1.5s, create ui: 0.7s, gradio launch: 14.6s).
Applying attention optimization: Doggettx... done.
Model loaded in 57.6s (load weights from disk: 3.7s, create model: 0.6s, apply weights to model: 52.4s, apply float(): 0.1s, move model to device: 0.1s, load textual inversion embeddings: 0.1s, calculate empty prompt: 0.5s).
 94%|█████████████████████████████████████████████████████████████████████████████████████████████▊      | 15/16 [00:23<00:01,  1.57s/it]
*** Error completing request███████████████████████████████████████████████████████████████████████▊     | 15/16 [00:10<00:00,  1.36it/s]
*** Arguments: ('task(g2stg52398aqzeo)', 3, 'futuristic drone flight out of dark tunnel <lora:Detail_Enhancer:1>', '', [], None, None, None, <PIL.Image.Image image mode=RGB size=4608x2592 at 0x7F75741D5210>, <PIL.Image.Image image mode=RGB size=4608x2592 at 0x7F7560ACD840>, None, None, 20, 'DPM++ 2M Karras', 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x7f7560abe500>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 
'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, 
False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
    Traceback (most recent call last):
      File "/home/ubuntu/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/home/ubuntu/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/modules/img2img.py", line 208, in img2img
        processed = process_images(p)
      File "/home/ubuntu/stable-diffusion-webui/modules/processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "/home/ubuntu/stable-diffusion-webui/extensions/controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/modules/processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "/home/ubuntu/stable-diffusion-webui/modules/processing.py", line 1528, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "/home/ubuntu/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 188, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/home/ubuntu/stable-diffusion-webui/modules/sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "/home/ubuntu/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 188, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 169, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/modules/sd_models_xl.py", line 37, in apply_model
        return self.model(x, t, cond)
      File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "/home/ubuntu/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/wrappers.py", line 28, in forward
        return self.diffusion_model(
      File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/openaimodel.py", line 993, in forward
        h = module(h, emb, context)
      File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/openaimodel.py", line 100, in forward
        x = layer(x, context)
      File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/attention.py", line 627, in forward
        x = block(x, context=context[i])
      File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/attention.py", line 459, in forward
        return checkpoint(
      File "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/util.py", line 165, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/util.py", line 182, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "/home/ubuntu/stable-diffusion-webui/repositories/generative-models/sgm/modules/attention.py", line 467, in _forward
        self.attn1(
      File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ubuntu/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 266, in split_cross_attention_forward
        s1 = einsum('b i d, b j d -> b i j', q[:, i:end], k)
      File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/functional.py", line 378, in einsum
        return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
    torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 14.75 GiB total capacity; 14.22 GiB already allocated; 63.06 MiB free; 14.54 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

---
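For context, the OOM above is consistent with running SDXL in full precision: the launch arguments include `--no-half`, which keeps model weights in float32. A back-of-the-envelope sketch (the ~2.6B parameter count for the SDXL UNet is an approximation, and this ignores activations, the VAE, and the text encoders):

```python
# Rough estimate (not exact): why --no-half matters on a ~16 GiB card.
def weights_gib(n_params: float, bytes_per_param: int) -> float:
    """Size of the weights alone, in GiB."""
    return n_params * bytes_per_param / 1024**3

SDXL_UNET_PARAMS = 2.6e9  # assumption: rough public figure for SDXL's UNet

fp32 = weights_gib(SDXL_UNET_PARAMS, 4)  # --no-half keeps float32 weights
fp16 = weights_gib(SDXL_UNET_PARAMS, 2)  # default half precision
print(f"fp32 UNet weights: ~{fp32:.1f} GiB, fp16: ~{fp16:.1f} GiB")
```

With float32 weights alone taking close to 10 GiB, adding extension models and sampling activations on a 14.75 GiB T4 leaves very little headroom.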

Additional information

No response

seddouguim commented 1 year ago

I apologize to the devs for even flagging this as a bug report in the first place.

I didn't realize that the models used by your extensions are also loaded into the GPU.

If anyone experiences the same issue, i.e. a GPU that is saturated at startup, first make sure you're not loading more than one base model. Also check that the extensions you use, especially ControlNet and ADetailer, are not loading all of their models at once.
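Beyond trimming extension models, two things worth trying on a card this size (a sketch, not a verified fix: `max_split_size_mb` is the allocator's own suggestion from the OOM message, and `--medvram` is the webui's low-VRAM option; flag names as of v1.6):

```shell
# Sketch, not a verified fix: follow the allocator hint to reduce
# fragmentation, and drop --no-half so SDXL weights load in fp16.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
./webui.sh --share --medvram  # --medvram trades speed for lower VRAM usage
```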

Again, sorry for flagging this; let's mark it as solved.