AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Out of Memory on v1.6.0-2-g4afaaf8a (worked fine before update) #13906

Open thundercat71 opened 8 months ago

thundercat71 commented 8 months ago

Is there an existing issue for this?

What happened?

This fault started with the v1.6.0-2-g4afaaf8a update.

Generation works fine at 512 by 512 but fails at 1024 by 1024 with a CUDA out-of-memory error.

It will work, but VERY slowly, if I add --lowvram to the startup arguments. I have two other systems running the interface and so far have not dared to update them.
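Aside from --lowvram, the OOM message captured in the sysinfo below itself suggests setting `max_split_size_mb` to reduce allocator fragmentation. On Windows this can go in `webui-user.bat` before launch; a sketch (the value 512 is just an example to tune, not something verified on this setup):

```shell
rem webui-user.bat (sketch) -- tell PyTorch's caching allocator to cap
rem split-block size, as the CUDA OOM message recommends trying.
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
```

This only mitigates fragmentation; it cannot help if the model genuinely needs more VRAM than the card has.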

Steps to reproduce the problem

1. Load an SDXL checkpoint
2. Set a prompt
3. Set the resolution to 1024 by 1024
4. Click Generate
5. Out of memory
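For scale: the command line in the sysinfo includes `--no-half`, which keeps weights in fp32 (4 bytes per parameter instead of 2). A back-of-the-envelope sketch, assuming roughly 2.6 billion parameters for the SDXL UNet (an approximation, not a figure from this report), shows why that alone nearly fills an 11 GiB card:

```python
# Rough VRAM cost of holding the SDXL UNet weights alone.
# ~2.6e9 parameters is an assumed approximate figure.
unet_params = 2.6e9

fp32_gib = unet_params * 4 / 2**30  # --no-half: 4 bytes per parameter
fp16_gib = unet_params * 2 / 2**30  # half precision: 2 bytes per parameter

print(f"fp32 weights: {fp32_gib:.1f} GiB")  # ~9.7 GiB of an 11 GiB card
print(f"fp16 weights: {fp16_gib:.1f} GiB")  # ~4.8 GiB
```

--medvram offloads modules between CPU and GPU, but every tensor that does land on the GPU is still twice as large in fp32, which leaves little headroom for 1024×1024 activations.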

What should have happened?

Generate an image ;)

Sysinfo

{ "Platform": "Windows-10-10.0.19044-SP0", "Python": "3.10.6", "Version": "v1.6.0-2-g4afaaf8a", "Commit": "4afaaf8a020c1df457bcf7250cb1c7f609699fa7", "Script path": "D:\SD\stable-diffusion-webui", "Data path": "D:\SD\stable-diffusion-webui", "Extensions dir": "D:\SD\stable-diffusion-webui\extensions", "Checksum": "82812e726fb9bdf38323a137281fdb50cf154e4283202e06c9addfb939c4c7cc", "Commandline": [ "launch.py", "--xformers", "--autolaunch", "--theme", "dark", "--ckpt-dir", "D:\SD\models\Stable-diffusion", "--lora-dir", "D:\SD\models\Lora", "--medvram", "--no-half" ], "Torch env info": { "torch_version": "2.0.1+cu118", "is_debug_build": "False", "cuda_compiled_version": "11.8", "gcc_version": null, "clang_version": null, "cmake_version": "version 3.27.5", "os": "Microsoft Windows 10 Enterprise", "libc_version": "N/A", "python_version": "3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] (64-bit runtime)", "python_platform": "Windows-10-10.0.19044-SP0", "is_cuda_available": "True", "cuda_runtime_version": "12.3.52\r", "cuda_module_loading": "LAZY", "nvidia_driver_version": "546.01", "nvidia_gpu_models": [ "GPU 0: NVIDIA GeForce GTX 1080 Ti", "GPU 1: NVIDIA GeForce GTX 1080 Ti" ], "cudnn_version": null, "pip_version": "pip3", "pip_packages": [ "numpy==1.23.5", "open-clip-torch==2.20.0", "pytorch-lightning==1.9.4", "torch==2.0.1+cu118", "torchdiffeq==0.2.3", "torchmetrics==1.2.0", "torchsde==0.2.5", "torchvision==0.15.2+cu118" ], "conda_packages": null, "hip_compiled_version": "N/A", "hip_runtime_version": "N/A", "miopen_runtime_version": "N/A", "caching_allocator_config": "", "is_xnnpack_available": "True", "cpu_info": [ "Architecture=9", "CurrentClockSpeed=3696", "DeviceID=CPU0", "Family=198", "L2CacheSize=1536", "L2CacheSpeed=", "Manufacturer=GenuineIntel", "MaxClockSpeed=3696", "Name=Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz", "ProcessorType=3", "Revision=" ] }, "Exceptions": [ { "exception": "CUDA out of memory. 
Tried to allocate 80.00 MiB (GPU 0; 11.00 GiB total capacity; 10.03 GiB already allocated; 0 bytes free; 10.19 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF", "traceback": [ [ "D:\SD\stable-diffusion-webui\modules\call_queue.py, line 57, f", "res = list(func(*args, kwargs))" ], [ "D:\SD\stable-diffusion-webui\modules\call_queue.py, line 36, f", "res = func(*args, *kwargs)" ], [ "D:\SD\stable-diffusion-webui\modules\txt2img.py, line 55, txt2img", "processed = processing.process_images(p)" ], [ "D:\SD\stable-diffusion-webui\modules\processing.py, line 732, process_images", "res = process_images_inner(p)" ], [ "D:\SD\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py, line 42, processing_process_images_hijack", "return getattr(processing, '__controlnet_original_process_images_inner')(p, args, kwargs)" ], [ "D:\SD\stable-diffusion-webui\modules\processing.py, line 867, process_images_inner", "samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)" ], [ "D:\SD\stable-diffusion-webui\modules\processing.py, line 1140, sample", "samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))" ], [ "D:\SD\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py, line 235, sample", "samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, extra_params_kwargs))" ], [ "D:\SD\stable-diffusion-webui\modules\sd_samplers_common.py, line 261, launch_sampling", "return func()" ], [ "D:\SD\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py, line 235, ", "samples = self.launch_sampling(steps, lambda: 
self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, extra_params_kwargs))" ], [ "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py, line 115, decorate_context", "return func(*args, kwargs)" ], [ "D:\SD\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py, line 594, sample_dpmpp_2m", "denoised = model(x, sigmas[i] * s_in, *extra_args)" ], [ "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(args, kwargs)" ], [ "D:\SD\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py, line 169, forward", "x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))" ], [ "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(*args, kwargs)" ], [ "D:\SD\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py, line 112, forward", "eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), *kwargs)" ], [ "D:\SD\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py, line 138, get_eps", "return self.inner_model.apply_model(args, kwargs)" ], [ "D:\SD\stable-diffusion-webui\modules\sd_models_xl.py, line 37, apply_model", "return self.model(x, t, cond)" ], [ "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1538, _call_impl", "result = forward_call(*args, kwargs)" ], [ "D:\SD\stable-diffusion-webui\modules\sd_hijack_utils.py, line 17, ", "setattr(resolved_obj, func_path[-1], lambda *args, *kwargs: self(args, kwargs))" ], [ "D:\SD\stable-diffusion-webui\modules\sd_hijack_utils.py, line 28, call", "return self.__orig_func(*args, kwargs)" ], [ "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py, line 28, forward", "return self.diffusion_model(" ], [ 
"D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(*args, *kwargs)" ], [ "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py, line 993, forward", "h = module(h, emb, context)" ], [ "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(args, kwargs)" ], [ "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py, line 100, forward", "x = layer(x, context)" ], [ "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(*args, kwargs)" ], [ "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py, line 627, forward", "x = block(x, context=context[i])" ], [ "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(*args, kwargs)" ], [ "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py, line 459, forward", "return checkpoint(" ], [ "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py, line 165, checkpoint", "return CheckpointFunction.apply(func, len(inputs), args)" ], [ "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py, line 506, apply", "return super().apply(args, kwargs) # type: ignore[misc]" ], [ "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py, line 182, forward", "output_tensors = ctx.run_function(ctx.input_tensors)" ], [ "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py, line 483, _forward", "x = self.ff(self.norm3(x)) + x" ], [ "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(args, *kwargs)" 
], [ "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py, line 108, forward", "return self.net(x)" ], [ "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(args, kwargs)" ], [ "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py, line 217, forward", "input = module(input)" ], [ "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(*args, *kwargs)" ], [ "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py, line 89, forward", "return x F.gelu(gate)" ] ] } ], "CPU": { "model": "Intel64 Family 6 Model 158 Stepping 10, GenuineIntel", "count logical": 12, "count physical": 6 }, "RAM": { "total": "32GB", "used": "14GB", "free": "17GB" }, "Extensions": [ { "name": "sd-webui-controlnet", "path": "D:\SD\stable-diffusion-webui\extensions\sd-webui-controlnet", "version": "05ef0b1c", "branch": "main", "remote": "https://github.com/Mikubill/sd-webui-controlnet.git" }, { "name": "ultimate-upscale-for-automatic1111", "path": "D:\SD\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111", "version": "728ffcec", "branch": "master", "remote": "https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git" } ], "Inactive extensions": [ { "name": "sd-webui-aspect-ratio-helper", "path": "D:\SD\stable-diffusion-webui\extensions\sd-webui-aspect-ratio-helper", "version": "99fcf9b0", "branch": "main", "remote": "https://github.com/thomasasfk/sd-webui-aspect-ratio-helper.git" } ], "Environment": { "COMMANDLINE_ARGS": "--xformers --autolaunch --theme dark --ckpt-dir 'D:\SD\models\Stable-diffusion' --lora-dir 'D:\SD\models\Lora' --medvram --no-half", "GRADIO_ANALYTICS_ENABLED": "False" }, "Config": { "samples_save": true, "samples_format": "png", "samples_filename_pattern": "", "save_images_add_number": true, "grid_save": true, 
"grid_format": "png", "grid_extended_filename": false, "grid_only_if_multiple": true, "grid_prevent_empty_spots": false, "grid_zip_filename_pattern": "", "n_rows": -1, "font": "", "grid_text_active_color": "#000000", "grid_text_inactive_color": "#999999", "grid_background_color": "#ffffff", "enable_pnginfo": true, "save_txt": false, "save_images_before_face_restoration": false, "save_images_before_highres_fix": false, "save_images_before_color_correction": false, "save_mask": false, "save_mask_composite": false, "jpeg_quality": 80, "webp_lossless": false, "export_for_4chan": true, "img_downscale_threshold": 4.0, "target_side_length": 4000, "img_max_size_mp": 200, "use_original_name_batch": true, "use_upscaler_name_as_suffix": false, "save_selected_only": true, "save_init_img": false, "temp_dir": "", "clean_temp_dir_at_start": false, "save_incomplete_images": false, "outdir_samples": "", "outdir_txt2img_samples": "outputs/txt2img-images", "outdir_img2img_samples": "outputs/img2img-images", "outdir_extras_samples": "outputs/extras-images", "outdir_grids": "", "outdir_txt2img_grids": "outputs/txt2img-grids", "outdir_img2img_grids": "outputs/img2img-grids", "outdir_save": "log/images", "outdir_init_images": "outputs/init-images", "save_to_dirs": true, "grid_save_to_dirs": true, "use_save_to_dirs_for_ui": false, "directories_filename_pattern": "[date]", "directories_max_prompt_words": 8, "ESRGAN_tile": 192, "ESRGAN_tile_overlap": 8, "realesrgan_enabled_models": [ "R-ESRGAN 4x+", "R-ESRGAN 4x+ Anime6B" ], "upscaler_for_img2img": null, "face_restoration": false, "face_restoration_model": "CodeFormer", "code_former_weight": 0.5, "face_restoration_unload": false, "auto_launch_browser": "Local", "show_warnings": false, "show_gradio_deprecation_warnings": true, "memmon_poll_rate": 8, "samples_log_stdout": false, "multiple_tqdm": true, "print_hypernet_extra": false, "list_hidden_files": true, "disable_mmap_load_safetensors": false, "hide_ldm_prints": true, 
"api_enable_requests": true, "api_forbid_local_requests": true, "api_useragent": "", "unload_models_when_training": false, "pin_memory": false, "save_optimizer_state": false, "save_training_settings_to_txt": true, "dataset_filename_word_regex": "", "dataset_filename_join_string": " ", "training_image_repeats_per_epoch": 1, "training_write_csv_every": 500, "training_xattention_optimizations": false, "training_enable_tensorboard": false, "training_tensorboard_save_images": false, "training_tensorboard_flush_every": 120, "sd_model_checkpoint": "juggernautXL_version6Rundiffusion.safetensors [1fe6c7ec54]", "sd_checkpoints_limit": 1, "sd_checkpoints_keep_in_cpu": true, "sd_checkpoint_cache": 0, "sd_unet": "Automatic", "enable_quantization": false, "enable_emphasis": true, "enable_batch_seeds": true, "comma_padding_backtrack": 20, "CLIP_stop_at_last_layers": 1, "upcast_attn": false, "randn_source": "GPU", "tiling": false, "hires_fix_refiner_pass": "second pass", "sdxl_crop_top": 0, "sdxl_crop_left": 0, "sdxl_refiner_low_aesthetic_score": 2.5, "sdxl_refiner_high_aesthetic_score": 6.0, "sd_vae_explanation": "VAE is a neural network that transforms a standard RGB\nimage into latent space representation and back. Latent space representation is what stable diffusion is working on during sampling\n(i.e. when the progress bar is between empty and full). 
For txt2img, VAE is used to create a resulting image after the sampling is finished.\nFor img2img, VAE is used to process user's input image before the sampling, and to create an image after sampling.", "sd_vae_checkpoint_cache": 0, "sd_vae": "Automatic", "sd_vae_overrides_per_model_preferences": true, "auto_vae_precision": true, "sd_vae_encode_method": "Full", "sd_vae_decode_method": "Full", "inpainting_mask_weight": 1.0, "initial_noise_multiplier": 1.0, "img2img_extra_noise": 0.0, "img2img_color_correction": false, "img2img_fix_steps": false, "img2img_background_color": "#ffffff", "img2img_editor_height": 720, "img2img_sketch_default_brush_color": "#ffffff", "img2img_inpaint_mask_brush_color": "#ffffff", "img2img_inpaint_sketch_default_brush_color": "#ffffff", "return_mask": false, "return_mask_composite": false, "cross_attention_optimization": "Automatic", "s_min_uncond": 0.0, "token_merging_ratio": 0.0, "token_merging_ratio_img2img": 0.0, "token_merging_ratio_hr": 0.0, "pad_cond_uncond": false, "persistent_cond_cache": true, "batch_cond_uncond": true, "use_old_emphasis_implementation": false, "use_old_karras_scheduler_sigmas": false, "no_dpmpp_sde_batch_determinism": false, "use_old_hires_fix_width_height": false, "dont_fix_second_order_samplers_schedule": false, "hires_fix_use_firstpass_conds": false, "use_old_scheduling": false, "interrogate_keep_models_in_memory": false, "interrogate_return_ranks": false, "interrogate_clip_num_beams": 1, "interrogate_clip_min_length": 24, "interrogate_clip_max_length": 48, "interrogate_clip_dict_limit": 1500, "interrogate_clip_skip_categories": [], "interrogate_deepbooru_score_threshold": 0.5, "deepbooru_sort_alpha": true, "deepbooru_use_spaces": true, "deepbooru_escape": true, "deepbooru_filter_tags": "", "extra_networks_show_hidden_directories": true, "extra_networks_hidden_models": "When searched", "extra_networks_default_multiplier": 1.0, "extra_networks_card_width": 0, "extra_networks_card_height": 0, 
"extra_networks_card_text_scale": 1.0, "extra_networks_card_show_desc": true, "extra_networks_add_text_separator": " ", "ui_extra_networks_tab_reorder": "", "textual_inversion_print_at_load": false, "textual_inversion_add_hashes_to_infotext": true, "sd_hypernetwork": "None", "localization": "None", "gradio_theme": "Default", "gradio_themes_cache": true, "gallery_height": "", "return_grid": true, "do_not_show_images": false, "send_seed": true, "send_size": true, "js_modal_lightbox": true, "js_modal_lightbox_initially_zoomed": true, "js_modal_lightbox_gamepad": false, "js_modal_lightbox_gamepad_repeat": 250, "show_progress_in_title": true, "samplers_in_dropdown": true, "dimensions_and_batch_together": true, "keyedit_precision_attention": 0.1, "keyedit_precision_extra": 0.05, "keyedit_delimiters": ".,\/!?%^*;:{}=`~()", "keyedit_move": true, "quicksettings_list": [ "sd_model_checkpoint", "sd_vae" ], "ui_tab_order": [], "hidden_tabs": [], "ui_reorder_list": [], "hires_fix_show_sampler": false, "hires_fix_show_prompts": false, "disable_token_counters": false, "add_model_hash_to_info": true, "add_model_name_to_info": true, "add_user_name_to_info": false, "add_version_to_infotext": true, "disable_weights_auto_swap": true, "infotext_styles": "Apply if any", "show_progressbar": true, "live_previews_enable": true, "live_previews_image_format": "png", "show_progress_grid": true, "show_progress_every_n_steps": 10, "show_progress_type": "Approx NN", "live_preview_allow_lowvram_full": false, "live_preview_content": "Prompt", "live_preview_refresh_period": 1000, "live_preview_fast_interrupt": false, "hide_samplers": [], "eta_ddim": 0.0, "eta_ancestral": 1.0, "ddim_discretize": "uniform", "s_churn": 0.0, "s_tmin": 0.0, "s_tmax": 0.0, "s_noise": 1.0, "k_sched_type": "Automatic", "sigma_min": 0.0, "sigma_max": 0.0, "rho": 0.0, "eta_noise_seed_delta": 0, "always_discard_next_to_last_sigma": false, "sgm_noise_multiplier": false, "uni_pc_variant": "bh1", "uni_pc_skip_type": 
"time_uniform", "uni_pc_order": 3, "uni_pc_lower_order_final": true, "postprocessing_enable_in_main_ui": [], "postprocessing_operation_order": [], "upscaling_max_images_in_cache": 5, "disabled_extensions": [ "sd-webui-aspect-ratio-helper" ], "disable_all_extensions": "none", "restore_config_state_file": "", "sd_checkpoint_hash": "1fe6c7ec54c786040cdabc7b4e89720069d97096922e20d01f13e7764412b47f", "ldsr_steps": 100, "ldsr_cached": false, "SCUNET_tile": 256, "SCUNET_tile_overlap": 8, "SWIN_tile": 192, "SWIN_tile_overlap": 8, "lora_functional": false, "sd_lora": "None", "lora_preferred_name": "Alias from file", "lora_add_hashes_to_infotext": true, "lora_show_all": false, "lora_hide_unknown_for_versions": [], "lora_in_memory_limit": 0, "extra_options_txt2img": [ "face_restoration", "tiling" ], "extra_options_img2img": [ "face_restoration", "tiling" ], "extra_options_cols": 1, "extra_options_accordion": false, "canvas_hotkey_zoom": "Alt", "canvas_hotkey_adjust": "Ctrl", "canvas_hotkey_move": "F", "canvas_hotkey_fullscreen": "S", "canvas_hotkey_reset": "R", "canvas_hotkey_overlap": "O", "canvas_show_tooltip": true, "canvas_auto_expand": true, "canvas_blur_prompt": false, "canvas_disabled_functions": [ "Overlap" ], "control_net_detectedmap_dir": "detected_maps", "control_net_models_path": "", "control_net_modules_path": "", "control_net_unit_count": 3, "control_net_model_cache_size": 1, "control_net_inpaint_blur_sigma": 7, "control_net_no_high_res_fix": false, "control_net_no_detectmap": false, "control_net_detectmap_autosaving": false, "control_net_allow_script_control": false, "control_net_sync_field_args": true, "controlnet_show_batch_images_in_ui": false, "controlnet_increment_seed_during_batch": false, "controlnet_disable_control_type": false, "controlnet_disable_openpose_edit": false, "controlnet_ignore_noninpaint_mask": false }, "Startup": { "total": 32.43173336982727, "records": { "initial startup": 0.008999109268188477, "prepare environment/checks": 
0.04000091552734375, "prepare environment/git version info": 0.2520027160644531, "prepare environment/torch GPU test": 7.33830189704895, "prepare environment/clone repositores": 0.4564642906188965, "prepare environment/run extensions installers/sd-webui-controlnet": 0.8070003986358643, "prepare environment/run extensions installers/ultimate-upscale-for-automatic1111": 0.0009996891021728516, "prepare environment/run extensions installers": 0.8080000877380371, "prepare environment": 9.34259295463562, "launcher": 0.031000614166259766, "import torch": 7.956767320632935, "import gradio": 3.415480613708496, "setup paths": 2.5790553092956543, "import ldm": 0.019971132278442383, "import sgm": 0.0, "initialize shared": 0.5260107517242432, "other imports": 2.1720759868621826, "opts onchange": 0.0, "setup SD model": 0.003000020980834961, "setup codeformer": 0.45697927474975586, "setup gfpgan": 0.09203386306762695, "set samplers": 0.0, "list extensions": 0.0009984970092773438, "restore config state file": 0.0, "list SD models": 0.551983118057251, "list localizations": 0.0009987354278564453, "load scripts/custom_code.py": 0.014998912811279297, "load scripts/img2imgalt.py": 0.010031461715698242, "load scripts/loopback.py": 0.011971712112426758, "load scripts/outpainting_mk_2.py": 0.004998683929443359, "load scripts/poor_mans_outpainting.py": 0.008029699325561523, "load scripts/postprocessing_codeformer.py": 0.011970758438110352, "load scripts/postprocessing_gfpgan.py": 0.008002519607543945, "load scripts/postprocessing_upscale.py": 0.0009999275207519531, "load scripts/prompt_matrix.py": 0.002001047134399414, "load scripts/prompts_from_file.py": 0.00099945068359375, "load scripts/refiner.py": 0.001999378204345703, "load scripts/sd_upscale.py": 0.0, "load scripts/seed.py": 0.002000093460083008, "load scripts/xyz_grid.py": 0.014028072357177734, "load scripts/adapter.py": 0.003999233245849609, "load scripts/api.py": 2.197295665740967, "load scripts/batch_hijack.py": 
0.012969017028808594, "load scripts/cldm.py": 0.0009989738464355469, "load scripts/controlmodel_ipadapter.py": 0.0010008811950683594, "load scripts/controlnet.py": 0.737652063369751, "load scripts/controlnet_diffusers.py": 0.00099945068359375, "load scripts/controlnet_lllite.py": 0.0, "load scripts/controlnet_lora.py": 0.0, "load scripts/controlnet_model_guess.py": 0.0010004043579101562, "load scripts/controlnet_version.py": 0.0, "load scripts/external_code.py": 0.0009989738464355469, "load scripts/global_state.py": 0.0, "load scripts/hook.py": 0.0010001659393310547, "load scripts/infotext.py": 0.0010335445404052734, "load scripts/logging.py": 0.0, "load scripts/lvminthin.py": 0.0, "load scripts/movie2movie.py": 0.006994724273681641, "load scripts/processor.py": 0.0, "load scripts/utils.py": 0.0010030269622802734, "load scripts/xyz_grid_support.py": 0.005967617034912109, "load scripts/ultimate-upscale.py": 0.01203012466430664, "load scripts/ldsr_model.py": 0.08800148963928223, "load scripts/lora_script.py": 0.23100519180297852, "load scripts/scunet_model.py": 0.05600142478942871, "load scripts/swinir_model.py": 0.05900096893310547, "load scripts/hotkey_config.py": 0.001001596450805664, "load scripts/extra_options_section.py": 0.0009686946868896484, "load scripts": 3.5129549503326416, "load upscalers": 0.004034519195556641, "refresh VAE": 0.0029659271240234375, "refresh textual inversion templates": 0.0, "scripts list_optimizers": 0.0010318756103515625, "scripts list_unets": 0.0, "reload hypernetworks": 0.0009982585906982422, "initialize extra networks": 0.06100034713745117, "scripts before_ui_callback": 0.0010008811950683594, "create ui": 0.7179553508758545, "gradio launch": 1.3966662883758545, "add APIs": 0.02100062370300293, "app_started_callback/api.py": 0.002000093460083008, "app_started_callback/lora_script.py": 0.0, "app_started_callback": 0.002000093460083008 } }, "Packages": [ "absl-py==2.0.0", "accelerate==0.21.0", "addict==2.4.0", "aenum==3.1.15", 
"aiofiles==23.2.1", "aiohttp==3.8.6", "aiosignal==1.3.1", "altair==5.1.2", "antlr4-python3-runtime==4.9.3", "anyio==3.7.1", "async-timeout==4.0.3", "attrs==23.1.0", "basicsr==1.4.2", "beautifulsoup4==4.12.2", "blendmodes==2022", "boltons==23.1.1", "cachetools==5.3.2", "certifi==2023.7.22", "cffi==1.16.0", "charset-normalizer==3.3.2", "clean-fid==0.1.35", "click==8.1.7", "clip==1.0", "colorama==0.4.6", "contourpy==1.2.0", "cssselect2==0.7.0", "cycler==0.12.1", "deprecation==2.1.0", "einops==0.4.1", "exceptiongroup==1.1.3", "facexlib==0.3.0", "fastapi==0.94.0", "ffmpy==0.3.1", "filelock==3.13.1", "filterpy==1.4.5", "flatbuffers==23.5.26", "fonttools==4.44.0", "frozenlist==1.4.0", "fsspec==2023.10.0", "ftfy==6.1.1", "future==0.18.3", "fvcore==0.1.5.post20221221", "gdown==4.7.1", "gfpgan==1.3.8", "gitdb==4.0.11", "gitpython==3.1.32", "google-auth-oauthlib==1.1.0", "google-auth==2.23.4", "gradio-client==0.5.0", "gradio==3.41.2", "grpcio==1.59.2", "h11==0.12.0", "httpcore==0.15.0", "httpx==0.24.1", "huggingface-hub==0.19.0", "idna==3.4", "imageio==2.32.0", "importlib-metadata==6.8.0", "importlib-resources==6.1.1", "inflection==0.5.1", "iopath==0.1.9", "jinja2==3.1.2", "jsonmerge==1.8.0", "jsonschema-specifications==2023.7.1", "jsonschema==4.19.2", "kiwisolver==1.4.5", "kornia==0.6.7", "lark==1.1.2", "lazy-loader==0.3", "lightning-utilities==0.9.0", "llvmlite==0.41.1", "lmdb==1.4.1", "lpips==0.1.4", "lxml==4.9.3", "markdown==3.5.1", "markupsafe==2.1.3", "matplotlib==3.8.1", "mediapipe==0.10.7", "mpmath==1.3.0", "multidict==6.0.4", "networkx==3.2.1", "numba==0.58.1", "numpy==1.23.5", "oauthlib==3.2.2", "omegaconf==2.2.3", "open-clip-torch==2.20.0", "opencv-contrib-python==4.8.1.78", "opencv-python==4.8.1.78", "orjson==3.9.10", "packaging==23.2", "pandas==2.1.2", "piexif==1.1.3", "pillow==9.5.0", "pip==22.2.1", "platformdirs==3.11.0", "portalocker==2.8.2", "protobuf==3.20.0", "psutil==5.9.5", "pyasn1-modules==0.3.0", "pyasn1==0.5.0", "pycparser==2.21", "pydantic==1.10.13", 
"pydub==0.25.1", "pyparsing==3.1.1", "pysocks==1.7.1", "python-dateutil==2.8.2", "python-multipart==0.0.6", "pytorch-lightning==1.9.4", "pytz==2023.3.post1", "pywavelets==1.4.1", "pywin32==306", "pyyaml==6.0.1", "realesrgan==0.3.0", "referencing==0.30.2", "regex==2023.10.3", "reportlab==4.0.7", "requests-oauthlib==1.3.1", "requests==2.31.0", "resize-right==0.0.2", "rpds-py==0.12.0", "rsa==4.9", "safetensors==0.3.1", "scikit-image==0.21.0", "scipy==1.11.3", "semantic-version==2.10.0", "sentencepiece==0.1.99", "setuptools==63.2.0", "six==1.16.0", "smmap==5.0.1", "sniffio==1.3.0", "sounddevice==0.4.6", "soupsieve==2.5", "starlette==0.26.1", "svglib==1.5.1", "sympy==1.12", "tabulate==0.9.0", "tb-nightly==2.16.0a20231108", "tensorboard-data-server==0.7.2", "termcolor==2.3.0", "tifffile==2023.9.26", "timm==0.9.2", "tinycss2==1.2.1", "tokenizers==0.13.3", "tomesd==0.1.3", "tomli==2.0.1", "toolz==0.12.0", "torch==2.0.1+cu118", "torchdiffeq==0.2.3", "torchmetrics==1.2.0", "torchsde==0.2.5", "torchvision==0.15.2+cu118", "tqdm==4.66.1", "trampoline==0.1.2", "transformers==4.30.2", "typing-extensions==4.8.0", "tzdata==2023.3", "urllib3==2.0.7", "uvicorn==0.24.0.post1", "wcwidth==0.2.9", "webencodings==0.5.1", "websockets==11.0.3", "werkzeug==3.0.1", "xformers==0.0.20", "yacs==0.1.8", "yapf==0.40.2", "yarl==1.9.2", "zipp==3.17.0" ] }

What browsers do you use to access the UI?

No response

Console logs

venv "D:\SD\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0-2-g4afaaf8a
Commit hash: 4afaaf8a020c1df457bcf7250cb1c7f609699fa7
Launching Web UI with arguments: --xformers --autolaunch --theme dark --ckpt-dir D:\SD\models\Stable-diffusion --lora-dir D:\SD\models\Lora --medvram --no-half
2023-11-08 13:57:08,428 - ControlNet - INFO - ControlNet v1.1.416
ControlNet preprocessor location: D:\SD\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-11-08 13:57:09,149 - ControlNet - INFO - ControlNet v1.1.416
Loading weights [1fe6c7ec54] from D:\SD\models\Stable-diffusion\juggernautXL_version6Rundiffusion.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Creating model from config: D:\SD\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Startup time: 32.4s (prepare environment: 9.3s, import torch: 8.0s, import gradio: 3.4s, setup paths: 2.6s, initialize shared: 0.5s, other imports: 2.2s, setup codeformer: 0.5s, list SD models: 0.6s, load scripts: 3.5s, create ui: 0.7s, gradio launch: 1.4s).
Applying attention optimization: xformers... done.
Model loaded in 9.2s (load weights from disk: 1.5s, create model: 0.7s, apply weights to model: 3.1s, apply float(): 2.1s, calculate empty prompt: 1.7s).
  0%|                                                                                           | 0/20 [00:05<?, ?it/s]
*** Error completing request
*** Arguments: ('task(th7km2n0b2i6dgd)', 'cat', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 1024, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x00000247FD14A830>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x00000247FD14B160>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x00000247FD14BC40>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x00000247345F1C60>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "D:\SD\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\SD\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "D:\SD\stable-diffusion-webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "D:\SD\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\SD\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "D:\SD\stable-diffusion-webui\modules\processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "D:\SD\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\SD\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "D:\SD\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "D:\SD\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\modules\sd_models_xl.py", line 37, in apply_model
        return self.model(x, t, cond)
      File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
        result = forward_call(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "D:\SD\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
        return self.diffusion_model(
      File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 993, in forward
        h = module(h, emb, context)
      File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 100, in forward
        x = layer(x, context)
      File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 627, in forward
        x = block(x, context=context[i])
      File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 459, in forward
        return checkpoint(
      File "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 165, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 182, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 483, in _forward
        x = self.ff(self.norm3(x)) + x
      File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 108, in forward
        return self.net(x)
      File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
        input = module(input)
      File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SD\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 89, in forward
        return x * F.gelu(gate)
    torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 11.00 GiB total capacity; 10.03 GiB already allocated; 0 bytes free; 10.19 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
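The allocator hint at the end of the error message can be tried before falling back to `--lowvram`. A minimal sketch of setting it (the `512` value is an illustrative starting point, not a recommendation from this thread; on Windows, put the equivalent `set` line in `webui-user.bat` instead of `export`):

```shell
# Suggested by the OOM message: cap PyTorch's allocator block size
# to reduce fragmentation when reserved memory >> allocated memory.
# Must be set before the web UI (and therefore torch) starts.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
```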

Additional information

I have tried a clean reinstall, running with all extensions disabled, and updating the CUDA drivers; none of these helped.

thundercat71 commented 8 months ago

Solved it by adding --no-half-vae to the launch arguments.
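For anyone hitting the same error, a sketch of how the fix slots into the launch arguments (shown with `export` for brevity; in `webui-user.bat` on Windows this is the `set COMMANDLINE_ARGS=` line, and the other flags here are just the reporter's originals, not requirements):

```shell
# Reporter's original flags plus the fix: --no-half-vae keeps the VAE in
# full precision while leaving the rest of the model at half precision.
export COMMANDLINE_ARGS="--xformers --medvram --no-half-vae"
```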