AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: LoRA doesn't have impact on the output #14866

Open patrykbart opened 5 months ago

patrykbart commented 5 months ago

### What happened?

I have been using `train_dreambooth_lora_sdxl.py` and `convert_diffusers_sdxl_lora_to_webui.py` to train a LoRA for a specific character.

I can use the LoRA with a diffusers script and in ComfyUI, but it doesn't work in A1111: it loads without any errors, yet it has no effect on the output.

### Steps to reproduce the problem

1. Create a VM with this Docker image: `pytorch/pytorch:2.0.0-cuda11.7-cudnn8-devel`

2. Install dependencies:

```Shell
apt update
apt install vim git tmux ffmpeg libsm6 libxext6 wget python3 python3-venv libgl1 libglib2.0-0 google-perftools -y

git clone https://github.com/huggingface/diffusers.git
cd diffusers
pip install -e .
cd examples/dreambooth
pip install -r requirements.txt
accelerate config default
pip install bitsandbytes xformers==0.0.19
```

3. Download baseline SDXL model:

```Shell
wget https://civitai.com/api/download/models/333449 -O DreamShaperXL.safetensors
```

4. Convert the `.safetensors` checkpoint to diffusers format using Python:

```python
import diffusers

pipe = diffusers.StableDiffusionXLPipeline.from_single_file("DreamShaperXL.safetensors")
pipe.save_pretrained("DreamShaperXL")
```

5. Train the LoRA (6 images of the same woman on a white background):

```Shell
export MODEL_NAME="DreamShaperXL"
export INSTANCE_DIR="data/claire"
export MAX_TRAIN_STEPS=5000
export CHECKPOINTING_STEPS=500

export OUTPUT_DIR="outputs/$(basename ${MODEL_NAME})_$(basename ${INSTANCE_DIR})"
export CUDA_LAUNCH_BLOCKING=1
export TORCH_USE_CUDA_DSA=1

printf "\n\nTraining Claire model with $MODEL_NAME on $INSTANCE_DIR, saving to $OUTPUT_DIR\n\n"

accelerate launch diffusers/examples/dreambooth/train_dreambooth_lora_sdxl.py \
  --instance_prompt="photo of wff woman, isolated on white background" \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --resolution=1024 \
  --train_batch_size=2 \
  --gradient_accumulation_steps=4 \
  --learning_rate=1e-4 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=$MAX_TRAIN_STEPS \
  --seed="0" \
  --train_text_encoder \
  --enable_xformers_memory_efficient_attention \
  --gradient_checkpointing \
  --use_8bit_adam \
  --checkpointing_steps=$CHECKPOINTING_STEPS
```

6. Convert to Kohya format (the converted file can be sanity-checked with the key-inspection sketch shown after these steps):

```Shell
python /diffusers/scripts/convert_diffusers_sdxl_lora_to_webui.py outputs/DreamShaperXL_claire/pytorch_lora_weights.safetensors test.safetensors
```

7. Move to A1111:

```Shell
mv test.safetensors stable-diffusion-webui/models/Lora/
```
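A quick way to sanity-check the file produced in step 6 is to print its tensor key names; this is a minimal sketch, and the path assumes the file has already been moved in step 7. The webui's built-in LoRA support expects Kohya-style names such as `lora_unet_*`, `lora_te1_*` and `lora_te2_*` with `.lora_down.weight` / `.lora_up.weight` / `.alpha` suffixes; if the keys still look diffusers-style (`unet.*`, `text_encoder.*`), that would be one plausible reason for a LoRA that loads but changes nothing.

```python
# Minimal sketch: list key names in the converted LoRA so the naming scheme
# can be checked by eye (path assumes step 7 already moved the file).
from safetensors import safe_open

path = "stable-diffusion-webui/models/Lora/test.safetensors"
with safe_open(path, framework="pt", device="cpu") as f:
    keys = sorted(f.keys())

print(f"{len(keys)} tensors")
for key in keys[:10]:  # the first few keys are enough to see the prefix style
    print(key)
```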


### What should have happened?

A1111 Config:

```
photo of wff woman, rides gondola in Venice,
Negative prompt: text, watermark, low quality, medium quality, blurry, censored, wrinkles, deformed, mutated text, watermark, low quality, medium quality, blurry, censored, wrinkles, deformed, mutated, BadDream, UnrealisticDream
Steps: 7, Sampler: DPM++ SDE Karras, CFG scale: 2, Seed: 420, Size: 1024x1024, Model hash: 676f0d60c8, Model: DreamShaperXL, Version: v1.7.0
```
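For reference, A1111's built-in LoRA support is activated through a prompt tag of the form `<lora:filename:weight>` (the filename without the `.safetensors` extension, then a weight). With the file from step 7, a prompt exercising the LoRA would look like the following; weight 1 is just an example value:

```
photo of wff woman, rides gondola in Venice, <lora:test:1>
```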

Image without any LoRA generated in A1111:
![image](https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/44069677/f4d0de5d-7b16-49fd-b648-3d76e53b79af)
Image with LoRA generated in A1111:
![image](https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/44069677/ed72252d-a269-4c6f-af20-6c455031ea74)
Image generated with custom code:

```python
import torch
from diffusers import DiffusionPipeline

pretrained_model = "DreamShaperXL"
lora_weights = "./outputs/DreamShaperXL_claire/checkpoint-4000/"

prompt = "photo of wff woman, rides gondola in Venice,"
negative_prompt = "text, watermark, low quality, medium quality, blurry, censored, wrinkles, deformed, mutated text, watermark, low quality, medium quality, blurry, censored, wrinkles, deformed, mutated"

pipe = DiffusionPipeline.from_pretrained(pretrained_model, torch_dtype=torch.float32)
pipe = pipe.to("cuda")
pipe.load_lora_weights(lora_weights)

# diffusers pipelines take a torch.Generator rather than a bare `seed` kwarg (seed 420)
generator = torch.Generator(device="cuda").manual_seed(420)

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=50,
    generator=generator,
).images[0]

image.save("lora_inference.png")
```

![image](https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/44069677/e27a9836-8bab-4543-ae2d-374c4debc123)

Note:
Obviously the locally generated image does not have the desired quality, but as you can see, it changes the output in the way I want.
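As an extra cross-check, and assuming diffusers' `load_lora_weights()` accepts A1111/Kohya-format files (which I believe recent releases do), the converted `test.safetensors` itself can be loaded back into the same diffusers pipeline. If that run still shows the character while A1111 does not, the conversion output is fine and the problem is on the webui side. A hedged sketch, reusing the paths from the steps above:

```python
# Hedged cross-check: load the *converted* Kohya-format file with diffusers.
# Assumes load_lora_weights() handles A1111/Kohya-style LoRA key names.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("DreamShaperXL", torch_dtype=torch.float32)
pipe = pipe.to("cuda")
pipe.load_lora_weights("stable-diffusion-webui/models/Lora", weight_name="test.safetensors")

generator = torch.Generator(device="cuda").manual_seed(420)
image = pipe(
    prompt="photo of wff woman, rides gondola in Venice,",
    num_inference_steps=50,
    generator=generator,
).images[0]
image.save("lora_inference_converted.png")
```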

### What browsers do you use to access the UI ?

Google Chrome

### Sysinfo

```json

{ "Platform": "Linux-5.15.0-92-generic-x86_64-with-glibc2.27", "Python": "3.10.9", "Version": "v1.7.0", "Commit": "cf2772fab0af5573da775e7437e6acdca424f26e", "Script path": "/project/stable-diffusion-webui", "Data path": "/project/stable-diffusion-webui", "Extensions dir": "/project/stable-diffusion-webui/extensions", "Checksum": "aa7b5be0c49e432099adab19774a104b6f96e635dbc22617a63220b8bd765965", "Commandline": [ "launch.py", "--xformers", "--api", "--share" ], "Torch env info": { "torch_version": "2.0.1+cu118", "is_debug_build": "False", "cuda_compiled_version": "11.8", "gcc_version": "(Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0", "clang_version": null, "cmake_version": "version 3.28.1", "os": "Ubuntu 18.04.6 LTS (x86_64)", "libc_version": "glibc-2.27", "python_version": "3.10.9 (main, Mar 8 2023, 10:47:38) [GCC 11.2.0] (64-bit runtime)", "python_platform": "Linux-5.15.0-92-generic-x86_64-with-glibc2.27", "is_cuda_available": "True", "cuda_runtime_version": "11.7.99", "cuda_module_loading": "LAZY", "nvidia_driver_version": "525.147.05", "nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 3090", "cudnn_version": [ "Probably one of the following:", "/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0", "/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0", "/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0", "/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0", "/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0", "/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0", "/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0" ], "pip_version": "pip3", "pip_packages": [ "numpy==1.23.5", "open-clip-torch==2.20.0", "pytorch-lightning==1.9.4", "torch==2.0.1+cu118", "torchdiffeq==0.2.3", "torchmetrics==1.3.0.post0", "torchsde==0.2.6", "torchvision==0.15.2+cu118" ], "conda_packages": [ "blas 1.0 mkl ", "ffmpeg 4.3 hf484d3e_0 pytorch", "mkl 2021.4.0 h06a4308_640 ", "mkl-service 2.4.0 py310h7f8727e_0 ", "mkl_fft 1.3.1 py310hd6ae3a3_0 ", "mkl_random 1.2.2 py310h00e6091_0 ", "numpy 1.23.5 py310hd5efca6_0 ", "numpy-base 1.23.5 py310h8e6c178_0 ", "pytorch 2.0.0 py3.10_cuda11.7_cudnn8.5.0_0 pytorch", "pytorch-cuda 11.7 h778d358_3 pytorch", "pytorch-mutex 1.0 cuda pytorch", "torchaudio 2.0.0 py310_cu117 pytorch", "torchdata 0.6.0 py310 pytorch", "torchelastic 0.2.2 pypi_0 pypi", "torchtext 0.15.0 py310 pytorch", "torchtriton 2.0.0 py310 pytorch", "torchvision 0.15.0 py310_cu117 pytorch" ], "hip_compiled_version": "N/A", "hip_runtime_version": "N/A", "miopen_runtime_version": "N/A", "caching_allocator_config": "", "is_xnnpack_available": "True", "cpu_info": [ "Architecture: x86_64", "CPU op-mode(s): 32-bit, 64-bit", "Byte Order: Little Endian", "CPU(s): 128", "On-line CPU(s) list: 0-127", "Thread(s) per core: 2", "Core(s) per socket: 32", "Socket(s): 2", "NUMA node(s): 2", "Vendor ID: AuthenticAMD", "CPU family: 23", "Model: 49", "Model name: AMD EPYC 7452 32-Core Processor", "Stepping: 0", "CPU MHz: 1500.000", "CPU max MHz: 2350.0000", "CPU min MHz: 1500.0000", "BogoMIPS: 4700.27", "Virtualization: AMD-V", "L1d cache: 32K", "L1i cache: 32K", "L2 cache: 512K", "L3 cache: 16384K", "NUMA node0 CPU(s): 0-31,64-95", "NUMA node1 CPU(s): 32-63,96-127", "Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a 
misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es" ] }, "Exceptions": [ { "exception": "A tensor with all NaNs was produced in VAE. This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.", "traceback": [ [ "/project/stable-diffusion-webui/modules/processing.py, line 600, decode_latent_batch", "devices.test_for_nans(sample, \"vae\")" ], [ "/project/stable-diffusion-webui/modules/devices.py, line 150, test_for_nans", "raise NansException(message)" ] ] } ], "CPU": { "model": "x86_64", "count logical": 128, "count physical": 64 }, "RAM": { "total": "252GB", "used": "38GB", "free": "23GB", "active": "102GB", "inactive": "118GB", "buffers": "1GB", "cached": "190GB", "shared": "220MB" }, "Extensions": [], "Inactive extensions": [], "Environment": { "COMMANDLINE_ARGS": "--xformers --api --share", "GIT": "git", "GRADIO_ANALYTICS_ENABLED": "False" }, "Config": { "samples_save": true, "samples_format": "png", "samples_filename_pattern": "", "save_images_add_number": true, "save_images_replace_action": "Replace", "grid_save": true, "grid_format": "png", "grid_extended_filename": false, "grid_only_if_multiple": true, "grid_prevent_empty_spots": false, "grid_zip_filename_pattern": "", "n_rows": -1, "font": "", "grid_text_active_color": "#000000", "grid_text_inactive_color": "#999999", "grid_background_color": "#ffffff", "save_images_before_face_restoration": false, "save_images_before_highres_fix": false, "save_images_before_color_correction": false, "save_mask": false, "save_mask_composite": false, "jpeg_quality": 80, "webp_lossless": false, "export_for_4chan": true, "img_downscale_threshold": 4.0, "target_side_length": 4000, "img_max_size_mp": 200, "use_original_name_batch": true, "use_upscaler_name_as_suffix": false, "save_selected_only": true, "save_init_img": false, "temp_dir": "", "clean_temp_dir_at_start": false, "save_incomplete_images": false, "notification_audio": true, "notification_volume": 100, "outdir_samples": "", "outdir_txt2img_samples": "outputs/txt2img-images", "outdir_img2img_samples": "outputs/img2img-images", "outdir_extras_samples": "outputs/extras-images", "outdir_grids": "", "outdir_txt2img_grids": "outputs/txt2img-grids", "outdir_img2img_grids": "outputs/img2img-grids", "outdir_save": "log/images", "outdir_init_images": "outputs/init-images", "save_to_dirs": true, "grid_save_to_dirs": true, "use_save_to_dirs_for_ui": false, "directories_filename_pattern": "[date]", "directories_max_prompt_words": 8, "ESRGAN_tile": 192, "ESRGAN_tile_overlap": 8, "realesrgan_enabled_models": [ "R-ESRGAN 4x+", "R-ESRGAN 4x+ Anime6B" ], "upscaler_for_img2img": null, "face_restoration": false, "face_restoration_model": "CodeFormer", "code_former_weight": 0.5, "face_restoration_unload": false, "auto_launch_browser": "Local", "enable_console_prompts": false, "show_warnings": false, "show_gradio_deprecation_warnings": true, "memmon_poll_rate": 8, 
"samples_log_stdout": false, "multiple_tqdm": true, "print_hypernet_extra": false, "list_hidden_files": true, "disable_mmap_load_safetensors": false, "hide_ldm_prints": true, "dump_stacks_on_signal": false, "api_enable_requests": true, "api_forbid_local_requests": true, "api_useragent": "", "unload_models_when_training": false, "pin_memory": false, "save_optimizer_state": false, "save_training_settings_to_txt": true, "dataset_filename_word_regex": "", "dataset_filename_join_string": " ", "training_image_repeats_per_epoch": 1, "training_write_csv_every": 500, "training_xattention_optimizations": false, "training_enable_tensorboard": false, "training_tensorboard_save_images": false, "training_tensorboard_flush_every": 120, "sd_model_checkpoint": "juggernautXL_v8Rundiffusion.safetensors [aeb7e9e689]", "sd_checkpoints_limit": 1, "sd_checkpoints_keep_in_cpu": true, "sd_checkpoint_cache": 0, "sd_unet": "Automatic", "enable_quantization": false, "enable_emphasis": true, "enable_batch_seeds": true, "comma_padding_backtrack": 20, "CLIP_stop_at_last_layers": 1, "upcast_attn": false, "randn_source": "GPU", "tiling": false, "hires_fix_refiner_pass": "second pass", "sdxl_crop_top": 0, "sdxl_crop_left": 0, "sdxl_refiner_low_aesthetic_score": 2.5, "sdxl_refiner_high_aesthetic_score": 6.0, "sd_vae_checkpoint_cache": 0, "sd_vae": "Automatic", "sd_vae_overrides_per_model_preferences": true, "auto_vae_precision": true, "sd_vae_encode_method": "Full", "sd_vae_decode_method": "Full", "inpainting_mask_weight": 1.0, "initial_noise_multiplier": 1.0, "img2img_extra_noise": 0.0, "img2img_color_correction": false, "img2img_fix_steps": false, "img2img_background_color": "#ffffff", "img2img_editor_height": 720, "img2img_sketch_default_brush_color": "#ffffff", "img2img_inpaint_mask_brush_color": "#ffffff", "img2img_inpaint_sketch_default_brush_color": "#ffffff", "return_mask": false, "return_mask_composite": false, "img2img_batch_show_results_limit": 32, "cross_attention_optimization": "Automatic", "s_min_uncond": 0.0, "token_merging_ratio": 0.0, "token_merging_ratio_img2img": 0.0, "token_merging_ratio_hr": 0.0, "pad_cond_uncond": false, "persistent_cond_cache": true, "batch_cond_uncond": true, "use_old_emphasis_implementation": false, "use_old_karras_scheduler_sigmas": false, "no_dpmpp_sde_batch_determinism": false, "use_old_hires_fix_width_height": false, "dont_fix_second_order_samplers_schedule": false, "hires_fix_use_firstpass_conds": false, "use_old_scheduling": false, "interrogate_keep_models_in_memory": false, "interrogate_return_ranks": false, "interrogate_clip_num_beams": 1, "interrogate_clip_min_length": 24, "interrogate_clip_max_length": 48, "interrogate_clip_dict_limit": 1500, "interrogate_clip_skip_categories": [], "interrogate_deepbooru_score_threshold": 0.5, "deepbooru_sort_alpha": true, "deepbooru_use_spaces": true, "deepbooru_escape": true, "deepbooru_filter_tags": "", "extra_networks_show_hidden_directories": true, "extra_networks_dir_button_function": false, "extra_networks_hidden_models": "When searched", "extra_networks_default_multiplier": 1.0, "extra_networks_card_width": 0, "extra_networks_card_height": 0, "extra_networks_card_text_scale": 1.0, "extra_networks_card_show_desc": true, "extra_networks_card_order_field": "Path", "extra_networks_card_order": "Ascending", "extra_networks_add_text_separator": " ", "ui_extra_networks_tab_reorder": "", "textual_inversion_print_at_load": false, "textual_inversion_add_hashes_to_infotext": true, "sd_hypernetwork": "None", "keyedit_precision_attention": 0.1, 
"keyedit_precision_extra": 0.05, "keyedit_delimiters": ".,\/!?%^*;:{}=`~() ", "keyedit_delimiters_whitespace": [ "Tab", "Carriage Return", "Line Feed" ], "keyedit_move": true, "disable_token_counters": false, "return_grid": true, "do_not_show_images": false, "js_modal_lightbox": true, "js_modal_lightbox_initially_zoomed": true, "js_modal_lightbox_gamepad": false, "js_modal_lightbox_gamepad_repeat": 250, "gallery_height": "", "compact_prompt_box": false, "samplers_in_dropdown": true, "dimensions_and_batch_together": true, "sd_checkpoint_dropdown_use_short": false, "hires_fix_show_sampler": false, "hires_fix_show_prompts": false, "txt2img_settings_accordion": false, "img2img_settings_accordion": false, "localization": "None", "quicksettings_list": [ "sd_model_checkpoint" ], "ui_tab_order": [], "hidden_tabs": [], "ui_reorder_list": [], "gradio_theme": "Default", "gradio_themes_cache": true, "show_progress_in_title": true, "send_seed": true, "send_size": true, "enable_pnginfo": true, "save_txt": false, "add_model_name_to_info": true, "add_model_hash_to_info": true, "add_vae_name_to_info": true, "add_vae_hash_to_info": true, "add_user_name_to_info": false, "add_version_to_infotext": true, "disable_weights_auto_swap": true, "infotext_skip_pasting": [], "infotext_styles": "Apply if any", "show_progressbar": true, "live_previews_enable": true, "live_previews_image_format": "png", "show_progress_grid": true, "show_progress_every_n_steps": 10, "show_progress_type": "Approx NN", "live_preview_allow_lowvram_full": false, "live_preview_content": "Prompt", "live_preview_refresh_period": 1000, "live_preview_fast_interrupt": false, "js_live_preview_in_modal_lightbox": false, "hide_samplers": [], "eta_ddim": 0.0, "eta_ancestral": 1.0, "ddim_discretize": "uniform", "s_churn": 0.0, "s_tmin": 0.0, "s_tmax": 0.0, "s_noise": 1.0, "k_sched_type": "Automatic", "sigma_min": 0.0, "sigma_max": 0.0, "rho": 0.0, "eta_noise_seed_delta": 0, "always_discard_next_to_last_sigma": false, "sgm_noise_multiplier": false, "uni_pc_variant": "bh1", "uni_pc_skip_type": "time_uniform", "uni_pc_order": 3, "uni_pc_lower_order_final": true, "postprocessing_enable_in_main_ui": [], "postprocessing_operation_order": [], "upscaling_max_images_in_cache": 5, "postprocessing_existing_caption_action": "Ignore", "disabled_extensions": [], "disable_all_extensions": "none", "restore_config_state_file": "", "sd_checkpoint_hash": "aeb7e9e6897a1e58b10494bd989d001e3d4bc9b634633cd7b559838f612c2867", "ldsr_steps": 100, "ldsr_cached": false, "SCUNET_tile": 256, "SCUNET_tile_overlap": 8, "SWIN_tile": 192, "SWIN_tile_overlap": 8, "SWIN_torch_compile": false, "hypertile_enable_unet": false, "hypertile_enable_unet_secondpass": false, "hypertile_max_depth_unet": 3, "hypertile_max_tile_unet": 256, "hypertile_swap_size_unet": 3, "hypertile_enable_vae": false, "hypertile_max_depth_vae": 3, "hypertile_max_tile_vae": 128, "hypertile_swap_size_vae": 3 }, "Startup": { "total": 20.302467823028564, "records": { "initial startup": 0.03315138816833496, "prepare environment/checks": 5.1975250244140625e-05, "prepare environment/git version info": 0.032486677169799805, "prepare environment/torch GPU test": 3.260802745819092, "prepare environment/clone repositores": 0.06354904174804688, "prepare environment/run extensions installers": 0.0026493072509765625, "prepare environment": 3.4426512718200684, "launcher": 0.004006862640380859, "import torch": 5.411977052688599, "import gradio": 1.9667365550994873, "setup paths": 2.7428033351898193, "import ldm": 
0.014272928237915039, "import sgm": 3.337860107421875e-06, "initialize shared": 0.3572075366973877, "other imports": 1.4099717140197754, "opts onchange": 0.0007200241088867188, "setup SD model": 0.008719682693481445, "setup codeformer": 0.3021426200866699, "setup gfpgan": 0.07605552673339844, "set samplers": 6.818771362304688e-05, "list extensions": 0.0020494461059570312, "restore config state file": 1.5020370483398438e-05, "list SD models": 0.006711006164550781, "list localizations": 0.0013132095336914062, "load scripts/custom_code.py": 0.01895427703857422, "load scripts/img2imgalt.py": 0.0010876655578613281, "load scripts/loopback.py": 0.0010650157928466797, "load scripts/outpainting_mk_2.py": 0.00091552734375, "load scripts/poor_mans_outpainting.py": 0.0008475780487060547, "load scripts/postprocessing_caption.py": 0.0010061264038085938, "load scripts/postprocessing_codeformer.py": 0.001004934310913086, "load scripts/postprocessing_create_flipped_copies.py": 0.0010581016540527344, "load scripts/postprocessing_focal_crop.py": 0.002343893051147461, "load scripts/postprocessing_gfpgan.py": 0.0010254383087158203, "load scripts/postprocessing_split_oversized.py": 0.0010039806365966797, "load scripts/postprocessing_upscale.py": 0.0009045600891113281, "load scripts/processing_autosized_crop.py": 0.0010364055633544922, "load scripts/prompt_matrix.py": 0.0008819103240966797, "load scripts/prompts_from_file.py": 0.0008723735809326172, "load scripts/sd_upscale.py": 0.0010349750518798828, "load scripts/xyz_grid.py": 0.0033254623413085938, "load scripts/ldsr_model.py": 0.3538815975189209, "load scripts/lora_script.py": 0.35898399353027344, "load scripts/scunet_model.py": 0.035782575607299805, "load scripts/swinir_model.py": 0.03134512901306152, "load scripts/hotkey_config.py": 0.0013935565948486328, "load scripts/extra_options_section.py": 0.0011320114135742188, "load scripts/hypertile_script.py": 0.05696606636047363, "load scripts/hypertile_xyz.py": 0.0006480216979980469, "load scripts/refiner.py": 0.001291513442993164, "load scripts/seed.py": 0.0007946491241455078, "load scripts": 0.8806211948394775, "load upscalers": 0.022945165634155273, "refresh VAE": 0.004582643508911133, "refresh textual inversion templates": 0.0005865097045898438, "scripts list_optimizers": 0.00032258033752441406, "scripts list_unets": 6.67572021484375e-06, "reload hypernetworks": 0.013101816177368164, "initialize extra networks": 0.03117680549621582, "scripts before_ui_callback": 0.0022182464599609375, "create ui": 0.4557759761810303, "gradio launch": 3.108870029449463, "add APIs": 0.08563518524169922, "app_started_callback/lora_script.py": 0.0004737377166748047, "app_started_callback": 0.0004775524139404297 } }, "Packages": [ "absl-py==2.1.0", "accelerate==0.21.0", "addict==2.4.0", "aenum==3.1.15", "aiofiles==23.2.1", "aiohttp==3.9.3", "aiosignal==1.3.1", "altair==5.2.0", "antlr4-python3-runtime==4.9.3", "anyio==3.7.1", "async-timeout==4.0.3", "attrs==23.2.0", "basicsr==1.4.2", "beautifulsoup4==4.12.3", "blendmodes==2022", "certifi==2024.2.2", "charset-normalizer==3.3.2", "clean-fid==0.1.35", "click==8.1.7", "clip==1.0", "cmake==3.28.1", "contourpy==1.2.0", "cycler==0.12.1", "deprecation==2.1.0", "einops==0.4.1", "exceptiongroup==1.2.0", "facexlib==0.3.0", "fastapi==0.94.0", "ffmpy==0.3.1", "filelock==3.13.1", "filterpy==1.4.5", "fonttools==4.48.1", "frozenlist==1.4.1", "fsspec==2024.2.0", "ftfy==6.1.3", "future==0.18.3", "gdown==5.1.0", "gfpgan==1.3.8", "gitdb==4.0.11", "gitpython==3.1.32", "gradio-client==0.5.0", 
"gradio==3.41.2", "grpcio==1.60.1", "h11==0.12.0", "httpcore==0.15.0", "httpx==0.24.1", "huggingface-hub==0.20.3", "idna==3.6", "imageio==2.33.1", "importlib-metadata==7.0.1", "importlib-resources==6.1.1", "inflection==0.5.1", "jinja2==3.1.3", "jsonmerge==1.8.0", "jsonschema-specifications==2023.12.1", "jsonschema==4.21.1", "kiwisolver==1.4.5", "kornia==0.6.7", "lark==1.1.2", "lazy-loader==0.3", "lightning-utilities==0.10.1", "lit==17.0.6", "llvmlite==0.42.0", "lmdb==1.4.1", "lpips==0.1.4", "markdown==3.5.2", "markupsafe==2.1.5", "matplotlib==3.8.2", "mpmath==1.3.0", "multidict==6.0.5", "networkx==3.2.1", "numba==0.59.0", "numpy==1.23.5", "omegaconf==2.2.3", "open-clip-torch==2.20.0", "opencv-python==4.9.0.80", "orjson==3.9.13", "packaging==23.2", "pandas==2.2.0", "piexif==1.1.3", "pillow==9.5.0", "pip==22.3.1", "platformdirs==4.2.0", "protobuf==3.20.0", "psutil==5.9.5", "pydantic==1.10.14", "pydub==0.25.1", "pyparsing==3.1.1", "pysocks==1.7.1", "python-dateutil==2.8.2", "python-multipart==0.0.7", "pytorch-lightning==1.9.4", "pytz==2024.1", "pywavelets==1.5.0", "pyyaml==6.0.1", "realesrgan==0.3.0", "referencing==0.33.0", "regex==2023.12.25", "requests==2.31.0", "resize-right==0.0.2", "rpds-py==0.17.1", "safetensors==0.3.1", "scikit-image==0.21.0", "scipy==1.12.0", "semantic-version==2.10.0", "sentencepiece==0.1.99", "setuptools==65.5.0", "six==1.16.0", "smmap==5.0.1", "sniffio==1.3.0", "soupsieve==2.5", "starlette==0.26.1", "sympy==1.12", "tb-nightly==2.16.0a20240206", "tensorboard-data-server==0.7.2", "tf-keras-nightly==2.16.0.dev2024020610", "tifffile==2024.1.30", "timm==0.9.2", "tokenizers==0.13.3", "tomesd==0.1.3", "tomli==2.0.1", "toolz==0.12.1", "torch==2.0.1+cu118", "torchdiffeq==0.2.3", "torchmetrics==1.3.0.post0", "torchsde==0.2.6", "torchvision==0.15.2+cu118", "tqdm==4.66.1", "trampoline==0.1.2", "transformers==4.30.2", "triton==2.0.0", "typing-extensions==4.9.0", "tzdata==2023.4", "urllib3==2.2.0", "uvicorn==0.27.0.post1", "wcwidth==0.2.13", "websockets==11.0.3", "werkzeug==3.0.1", "xformers==0.0.20", "yapf==0.40.2", "yarl==1.9.4", "zipp==3.17.0" ] }


```

### Console logs

```Shell
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on root user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Using TCMalloc: libtcmalloc_minimal.so.4
Python 3.10.9 (main, Mar  8 2023, 10:47:38) [GCC 11.2.0]
Version: v1.7.0
Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e
Launching Web UI with arguments: --xformers --api --share
Style database not found: /project/stable-diffusion-webui/styles.csv
Loading weights [aeb7e9e689] from /project/stable-diffusion-webui/models/Stable-diffusion/juggernautXL_v8Rundiffusion.safetensors
Running on local URL:  http://127.0.0.1:7860
Creating model from config: /project/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
Applying attention optimization: xformers... done.
Model loaded in 6.1s (load weights from disk: 1.9s, create model: 0.7s, apply weights to model: 3.1s, apply half(): 0.1s, calculate empty prompt: 0.1s).
Running on public URL: https://90a558500b31edaa81.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Startup time: 29.8s (prepare environment: 4.1s, import torch: 6.2s, import gradio: 2.0s, setup paths: 3.5s, initialize shared: 0.4s, other imports: 1.1s, setup codeformer: 0.3s, setup gfpgan: 0.4s, load scripts: 0.8s, create ui: 0.9s, gradio launch: 9.9s, add APIs: 0.1s).
Reusing loaded model juggernautXL_v8Rundiffusion.safetensors [aeb7e9e689] to load DreamShaperXL.safetensors [676f0d60c8]
Loading weights [676f0d60c8] from /project/stable-diffusion-webui/models/Stable-diffusion/DreamShaperXL.safetensors
Applying attention optimization: xformers... done.
Weights loaded in 27.0s (send model to cpu: 2.8s, load weights from disk: 0.9s, apply weights to model: 12.7s, move model to device: 10.6s).
Calculating sha256 for /project/stable-diffusion-webui/models/Lora/test.safetensors: d90adf86741514ae25e56c370cca19f51cf547cf5087ce8ce921d33e1df507f0
100%|____________________________________________________________________________________________________________________________________________________________________| 7/7 [00:04<00:00,  1.43it/s]
==========================================================================================_______________________________________________________________________________| 7/7 [00:03<00:00,  1.92it/s]
A tensor with all NaNs was produced in VAE.
Web UI will now convert VAE into 32-bit float and retry.
To disable this behavior, disable the 'Automatically revert VAE to 32-bit floats' setting.
To always start with 32-bit VAE, use --no-half-vae commandline flag.
==========================================================================================
Total progress: 100%|____________________________________________________________________________________________________________________________________________________| 7/7 [00:04<00:00,  1.55it/s]
100%|____________________________________________________________________________________________________________________________________________________________________| 7/7 [00:03<00:00,  1.93it/s]
Total progress: 100%|____________________________________________________________________________________________________________________________________________________| 7/7 [00:04<00:00,  1.65it/s]
100%|____________________________________________________________________________________________________________________________________________________________________| 7/7 [00:03<00:00,  1.92it/s]
Total progress: 100%|____________________________________________________________________________________________________________________________________________________| 7/7 [00:04<00:00,  1.65it/s]
Total progress: 100%|____________________________________________________________________________________________________________________________________________________| 7/7 [00:04<00:00,  1.90it/s]

```

### Additional information

I used exactly the same pipeline two weeks ago and everything worked fine. I also saw this diffusers issue from a week ago; maybe the same fix has to be applied in A1111? https://github.com/huggingface/diffusers/issues/6777

dain5832 commented 4 months ago

I'm struggling with the same error

purvag2003 commented 3 months ago

Exact same issue running A1111 on RunPod.

GodOfSmallThings commented 1 month ago

I'm facing the same problem. Has anybody solved it?