AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Seems like img2img does not take the image I uploaded. #14615

Open bgrmwbys opened 8 months ago

bgrmwbys commented 8 months ago

What happened?

It seems like img2img does not take the image I uploaded and just generates whatever I prompted. For example, the uploaded picture is an astronaut on a llama, while the prompt is "add flowers".

Steps to reproduce the problem

  1. Go to img2img
  2. Upload an existing image (in this case "an astronaut on a Llama")
  3. In the prompt type "add flowers"
  4. Click Generate.

What should have happened?

It should have added flowers in the background of the existing photo.

What browsers do you use to access the UI?

Google Chrome

Sysinfo

{ "Platform": "Windows-10-10.0.22621-SP0", "Python": "3.10.6", "Version": "v1.7.0", "Commit": "cf2772fab0af5573da775e7437e6acdca424f26e", "Script path": "C:\Users\user\Downloads\sd.webui\webui", "Data path": "C:\Users\user\Downloads\sd.webui\webui", "Extensions dir": "C:\Users\user\Downloads\sd.webui\webui\extensions", "Checksum": "5eaba2e0599f50d2be4671f4db2849df158c7d761168d3e01afe49c35d0897d8", "Commandline": [ "launch.py" ], "Torch env info": { "torch_version": "2.0.1+cu118", "is_debug_build": "False", "cuda_compiled_version": "11.8", "gcc_version": null, "clang_version": null, "cmake_version": null, "os": "Microsoft Windows 11 Pro", "libc_version": "N/A", "python_version": "3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] (64-bit runtime)", "python_platform": "Windows-10-10.0.22621-SP0", "is_cuda_available": "True", "cuda_runtime_version": null, "cuda_module_loading": "LAZY", "nvidia_driver_version": "537.42", "nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 3080", "cudnn_version": null, "pip_version": "pip3", "pip_packages": [ "numpy==1.23.5", "open-clip-torch==2.20.0", "pytorch-lightning==1.9.4", "torch==2.0.1+cu118", "torchdiffeq==0.2.3", "torchmetrics==1.2.1", "torchsde==0.2.6", "torchvision==0.15.2+cu118" ], "conda_packages": null, "hip_compiled_version": "N/A", "hip_runtime_version": "N/A", "miopen_runtime_version": "N/A", "caching_allocator_config": "", "is_xnnpack_available": "True", "cpu_info": [ "Architecture=9", "CurrentClockSpeed=3801", "DeviceID=CPU0", "Family=198", "L2CacheSize=2048", "L2CacheSpeed=", "Manufacturer=GenuineIntel", "MaxClockSpeed=3801", "Name=Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz", "ProcessorType=3", "Revision=" ] }, "Exceptions": [ { "exception": "A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the \"Upcast cross attention layer to float32\" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. 
Use --disable-nan-check commandline argument to disable this check.", "traceback": [ [ "C:\Users\user\Downloads\sd.webui\webui\modules\call_queue.py, line 57, f", "res = list(func(*args, kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\call_queue.py, line 36, f", "res = func(*args, kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\txt2img.py, line 55, txt2img", "processed = processing.process_images(p)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 734, process_images", "res = process_images_inner(p)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 868, process_images_inner", "samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 1142, sample", "samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_kdiffusion.py, line 235, sample", "samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, extra_params_kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_common.py, line 261, launch_sampling", "return func()" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_kdiffusion.py, line 235, ", "samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, *extra_params_kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\system\python\lib\site-packages\torch\utils\_contextlib.py, line 115, decorate_context", "return func(args, kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\repositories\k-diffusion\k_diffusion\sampling.py, line 594, sample_dpmpp_2m", "denoised = model(x, sigmas[i] * s_in, extra_args)" ], [ "C:\Users\user\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(*args, *kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_cfg_denoiser.py, line 201, forward", "devices.test_for_nans(x_out, \"unet\")" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\devices.py, line 150, test_for_nans", "raise NansException(message)" ] ] }, { "exception": "A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the \"Upcast cross attention layer to float32\" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. 
Use --disable-nan-check commandline argument to disable this check.", "traceback": [ [ "C:\Users\user\Downloads\sd.webui\webui\modules\call_queue.py, line 57, f", "res = list(func(args, kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\call_queue.py, line 36, f", "res = func(*args, kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\txt2img.py, line 55, txt2img", "processed = processing.process_images(p)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 734, process_images", "res = process_images_inner(p)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 868, process_images_inner", "samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 1142, sample", "samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_kdiffusion.py, line 235, sample", "samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, extra_params_kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_common.py, line 261, launch_sampling", "return func()" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_kdiffusion.py, line 235, ", "samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, extra_params_kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\system\python\lib\site-packages\torch\utils\_contextlib.py, line 115, decorate_context", "return func(*args, *kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\repositories\k-diffusion\k_diffusion\sampling.py, line 594, sample_dpmpp_2m", "denoised = model(x, sigmas[i] s_in, extra_args)" ], [ "C:\Users\user\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(*args, kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_cfg_denoiser.py, line 201, forward", "devices.test_for_nans(x_out, \"unet\")" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\devices.py, line 150, test_for_nans", "raise NansException(message)" ] ] }, { "exception": "A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the \"Upcast cross attention layer to float32\" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. 
Use --disable-nan-check commandline argument to disable this check.", "traceback": [ [ "C:\Users\user\Downloads\sd.webui\webui\modules\call_queue.py, line 57, f", "res = list(func(*args, *kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\call_queue.py, line 36, f", "res = func(args, kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\txt2img.py, line 55, txt2img", "processed = processing.process_images(p)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 734, process_images", "res = process_images_inner(p)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 868, process_images_inner", "samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 1142, sample", "samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_kdiffusion.py, line 235, sample", "samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, extra_params_kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_common.py, line 261, launch_sampling", "return func()" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_kdiffusion.py, line 235, ", "samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, extra_params_kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\system\python\lib\site-packages\torch\utils\_contextlib.py, line 115, decorate_context", "return func(*args, kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\repositories\k-diffusion\k_diffusion\sampling.py, line 594, sample_dpmpp_2m", "denoised = model(x, sigmas[i] * s_in, *extra_args)" ], [ "C:\Users\user\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(args, kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_cfg_denoiser.py, line 201, forward", "devices.test_for_nans(x_out, \"unet\")" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\devices.py, line 150, test_for_nans", "raise NansException(message)" ] ] }, { "exception": "A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the \"Upcast cross attention layer to float32\" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. 
Use --disable-nan-check commandline argument to disable this check.", "traceback": [ [ "C:\Users\user\Downloads\sd.webui\webui\modules\call_queue.py, line 57, f", "res = list(func(*args, kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\call_queue.py, line 36, f", "res = func(*args, kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\txt2img.py, line 55, txt2img", "processed = processing.process_images(p)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 734, process_images", "res = process_images_inner(p)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 868, process_images_inner", "samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 1142, sample", "samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_kdiffusion.py, line 235, sample", "samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, extra_params_kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_common.py, line 261, launch_sampling", "return func()" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_kdiffusion.py, line 235, ", "samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, *extra_params_kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\system\python\lib\site-packages\torch\utils\_contextlib.py, line 115, decorate_context", "return func(args, kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\repositories\k-diffusion\k_diffusion\sampling.py, line 594, sample_dpmpp_2m", "denoised = model(x, sigmas[i] * s_in, extra_args)" ], [ "C:\Users\user\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(*args, *kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_cfg_denoiser.py, line 201, forward", "devices.test_for_nans(x_out, \"unet\")" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\devices.py, line 150, test_for_nans", "raise NansException(message)" ] ] }, { "exception": "A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the \"Upcast cross attention layer to float32\" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. 
Use --disable-nan-check commandline argument to disable this check.", "traceback": [ [ "C:\Users\user\Downloads\sd.webui\webui\modules\call_queue.py, line 57, f", "res = list(func(args, kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\call_queue.py, line 36, f", "res = func(*args, kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\txt2img.py, line 55, txt2img", "processed = processing.process_images(p)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 734, process_images", "res = process_images_inner(p)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 868, process_images_inner", "samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\processing.py, line 1142, sample", "samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_kdiffusion.py, line 235, sample", "samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, extra_params_kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_common.py, line 261, launch_sampling", "return func()" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_kdiffusion.py, line 235, ", "samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, extra_params_kwargs))" ], [ "C:\Users\user\Downloads\sd.webui\system\python\lib\site-packages\torch\utils\_contextlib.py, line 115, decorate_context", "return func(*args, *kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\repositories\k-diffusion\k_diffusion\sampling.py, line 594, sample_dpmpp_2m", "denoised = model(x, sigmas[i] s_in, extra_args)" ], [ "C:\Users\user\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl", "return forward_call(*args, *kwargs)" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\sd_samplers_cfg_denoiser.py, line 201, forward", "devices.test_for_nans(x_out, \"unet\")" ], [ "C:\Users\user\Downloads\sd.webui\webui\modules\devices.py, line 150, test_for_nans", "raise NansException(message)" ] ] } ], "CPU": { "model": "Intel64 Family 6 Model 165 Stepping 5, GenuineIntel", "count logical": 16, "count physical": 8 }, "RAM": { "total": "32GB", "used": "18GB", "free": "14GB" }, "Extensions": [], "Inactive extensions": [], "Environment": { "GRADIO_ANALYTICS_ENABLED": "False" }, "Config": { "samples_save": true, "samples_format": "png", "samples_filename_pattern": "", "save_images_add_number": true, "save_images_replace_action": "Replace", "grid_save": true, "grid_format": "png", "grid_extended_filename": false, "grid_only_if_multiple": true, "grid_prevent_empty_spots": false, "grid_zip_filename_pattern": "", "n_rows": -1, "font": "", "grid_text_active_color": "#000000", "grid_text_inactive_color": "#999999", "grid_background_color": "#ffffff", "save_images_before_face_restoration": false, "save_images_before_highres_fix": false, "save_images_before_color_correction": false, "save_mask": false, "save_mask_composite": false, "jpeg_quality": 80, "webp_lossless": false, "export_for_4chan": true, "img_downscale_threshold": 4.0, 
"target_side_length": 4000, "img_max_size_mp": 200, "use_original_name_batch": true, "use_upscaler_name_as_suffix": false, "save_selected_only": true, "save_init_img": false, "temp_dir": "", "clean_temp_dir_at_start": false, "save_incomplete_images": false, "notification_audio": true, "notification_volume": 100, "outdir_samples": "", "outdir_txt2img_samples": "outputs/txt2img-images", "outdir_img2img_samples": "outputs/img2img-images", "outdir_extras_samples": "outputs/extras-images", "outdir_grids": "", "outdir_txt2img_grids": "outputs/txt2img-grids", "outdir_img2img_grids": "outputs/img2img-grids", "outdir_save": "log/images", "outdir_init_images": "outputs/init-images", "save_to_dirs": true, "grid_save_to_dirs": true, "use_save_to_dirs_for_ui": false, "directories_filename_pattern": "[date]", "directories_max_prompt_words": 8, "ESRGAN_tile": 192, "ESRGAN_tile_overlap": 8, "realesrgan_enabled_models": [ "R-ESRGAN 4x+", "R-ESRGAN 4x+ Anime6B" ], "upscaler_for_img2img": null, "face_restoration": false, "face_restoration_model": "CodeFormer", "code_former_weight": 0.5, "face_restoration_unload": false, "auto_launch_browser": "Local", "enable_console_prompts": false, "show_warnings": false, "show_gradio_deprecation_warnings": true, "memmon_poll_rate": 8, "samples_log_stdout": false, "multiple_tqdm": true, "print_hypernet_extra": false, "list_hidden_files": true, "disable_mmap_load_safetensors": false, "hide_ldm_prints": true, "dump_stacks_on_signal": false, "api_enable_requests": true, "api_forbid_local_requests": true, "api_useragent": "", "unload_models_when_training": false, "pin_memory": false, "save_optimizer_state": false, "save_training_settings_to_txt": true, "dataset_filename_word_regex": "", "dataset_filename_join_string": " ", "training_image_repeats_per_epoch": 1, "training_write_csv_every": 500, "training_xattention_optimizations": false, "training_enable_tensorboard": false, "training_tensorboard_save_images": false, "training_tensorboard_flush_every": 120, "sd_model_checkpoint": "cyberrealistic_v41BackToBasics.safetensors [41b6846108]", "sd_checkpoints_limit": 1, "sd_checkpoints_keep_in_cpu": true, "sd_checkpoint_cache": 0, "sd_unet": "Automatic", "enable_quantization": false, "enable_emphasis": true, "enable_batch_seeds": true, "comma_padding_backtrack": 20, "CLIP_stop_at_last_layers": 1, "upcast_attn": true, "randn_source": "GPU", "tiling": false, "hires_fix_refiner_pass": "first pass", "sdxl_crop_top": 0, "sdxl_crop_left": 0, "sdxl_refiner_low_aesthetic_score": 2.5, "sdxl_refiner_high_aesthetic_score": 6.0, "sd_vae_checkpoint_cache": 0, "sd_vae": "Automatic", "sd_vae_overrides_per_model_preferences": true, "auto_vae_precision": true, "sd_vae_encode_method": "Full", "sd_vae_decode_method": "Full", "inpainting_mask_weight": 1.0, "initial_noise_multiplier": 1.0, "img2img_extra_noise": 0.0, "img2img_color_correction": false, "img2img_fix_steps": false, "img2img_background_color": "#ffffff", "img2img_editor_height": 720, "img2img_sketch_default_brush_color": "#ffffff", "img2img_inpaint_mask_brush_color": "#ffffff", "img2img_inpaint_sketch_default_brush_color": "#ffffff", "return_mask": false, "return_mask_composite": false, "img2img_batch_show_results_limit": 32, "cross_attention_optimization": "Automatic", "s_min_uncond": 0.0, "token_merging_ratio": 0.0, "token_merging_ratio_img2img": 0.0, "token_merging_ratio_hr": 0.0, "pad_cond_uncond": false, "persistent_cond_cache": true, "batch_cond_uncond": true, "use_old_emphasis_implementation": false, "use_old_karras_scheduler_sigmas": 
false, "no_dpmpp_sde_batch_determinism": false, "use_old_hires_fix_width_height": false, "dont_fix_second_order_samplers_schedule": false, "hires_fix_use_firstpass_conds": false, "use_old_scheduling": false, "interrogate_keep_models_in_memory": false, "interrogate_return_ranks": false, "interrogate_clip_num_beams": 1, "interrogate_clip_min_length": 24, "interrogate_clip_max_length": 48, "interrogate_clip_dict_limit": 1500, "interrogate_clip_skip_categories": [], "interrogate_deepbooru_score_threshold": 0.5, "deepbooru_sort_alpha": true, "deepbooru_use_spaces": true, "deepbooru_escape": true, "deepbooru_filter_tags": "", "extra_networks_show_hidden_directories": true, "extra_networks_dir_button_function": false, "extra_networks_hidden_models": "When searched", "extra_networks_default_multiplier": 1.0, "extra_networks_card_width": 0, "extra_networks_card_height": 0, "extra_networks_card_text_scale": 1.0, "extra_networks_card_show_desc": true, "extra_networks_card_order_field": "Path", "extra_networks_card_order": "Ascending", "extra_networks_add_text_separator": " ", "ui_extra_networks_tab_reorder": "", "textual_inversion_print_at_load": false, "textual_inversion_add_hashes_to_infotext": true, "sd_hypernetwork": "None", "keyedit_precision_attention": 0.1, "keyedit_precision_extra": 0.05, "keyedit_delimiters": ".,\/!?%^;:{}=`~() ", "keyedit_delimiters_whitespace": [ "Tab", "Carriage Return", "Line Feed" ], "keyedit_move": true, "disable_token_counters": false, "return_grid": true, "do_not_show_images": false, "js_modal_lightbox": true, "js_modal_lightbox_initially_zoomed": true, "js_modal_lightbox_gamepad": false, "js_modal_lightbox_gamepad_repeat": 250, "gallery_height": "", "compact_prompt_box": false, "samplers_in_dropdown": true, "dimensions_and_batch_together": true, "sd_checkpoint_dropdown_use_short": false, "hires_fix_show_sampler": false, "hires_fix_show_prompts": false, "txt2img_settings_accordion": false, "img2img_settings_accordion": false, "localization": "None", "quicksettings_list": [ "sd_model_checkpoint" ], "ui_tab_order": [], "hidden_tabs": [], "ui_reorder_list": [], "gradio_theme": "Default", "gradio_themes_cache": true, "show_progress_in_title": true, "send_seed": true, "send_size": true, "enable_pnginfo": true, "save_txt": false, "add_model_name_to_info": true, "add_model_hash_to_info": true, "add_vae_name_to_info": true, "add_vae_hash_to_info": true, "add_user_name_to_info": false, "add_version_to_infotext": true, "disable_weights_auto_swap": true, "infotext_skip_pasting": [], "infotext_styles": "Apply if any", "show_progressbar": true, "live_previews_enable": true, "live_previews_image_format": "png", "show_progress_grid": true, "show_progress_every_n_steps": 10, "show_progress_type": "Approx NN", "live_preview_allow_lowvram_full": false, "live_preview_content": "Prompt", "live_preview_refresh_period": 1000, "live_preview_fast_interrupt": false, "js_live_preview_in_modal_lightbox": false, "hide_samplers": [], "eta_ddim": 0.0, "eta_ancestral": 1.0, "ddim_discretize": "uniform", "s_churn": 0.0, "s_tmin": 0.0, "s_tmax": 0.0, "s_noise": 1.0, "k_sched_type": "Automatic", "sigma_min": 0.0, "sigma_max": 0.0, "rho": 0.0, "eta_noise_seed_delta": 0, "always_discard_next_to_last_sigma": false, "sgm_noise_multiplier": false, "uni_pc_variant": "bh1", "uni_pc_skip_type": "time_uniform", "uni_pc_order": 3, "uni_pc_lower_order_final": true, "postprocessing_enable_in_main_ui": [], "postprocessing_operation_order": [], "upscaling_max_images_in_cache": 5, 
"postprocessing_existing_caption_action": "Ignore", "disabled_extensions": [], "disable_all_extensions": "none", "restore_config_state_file": "", "sd_checkpoint_hash": "41b6846108bfa99783b58b68f9e89b5c398e78304fbb129e83e1a4a5d39f5c5c", "ldsr_steps": 100, "ldsr_cached": false, "SCUNET_tile": 256, "SCUNET_tile_overlap": 8, "SWIN_tile": 192, "SWIN_tile_overlap": 8, "hypertile_enable_unet": false, "hypertile_enable_unet_secondpass": false, "hypertile_max_depth_unet": 3, "hypertile_max_tile_unet": 256, "hypertile_swap_size_unet": 3, "hypertile_enable_vae": false, "hypertile_max_depth_vae": 3, "hypertile_max_tile_vae": 128, "hypertile_swap_size_vae": 3, "lora_functional": false, "sd_lora": "None", "lora_preferred_name": "Alias from file", "lora_add_hashes_to_infotext": true, "lora_show_all": false, "lora_hide_unknown_for_versions": [], "lora_in_memory_limit": 0, "extra_options_txt2img": [], "extra_options_img2img": [], "extra_options_cols": 1, "extra_options_accordion": false, "canvas_hotkey_zoom": "Alt", "canvas_hotkey_adjust": "Ctrl", "canvas_hotkey_move": "F", "canvas_hotkey_fullscreen": "S", "canvas_hotkey_reset": "R", "canvas_hotkey_overlap": "O", "canvas_show_tooltip": true, "canvas_auto_expand": true, "canvas_blur_prompt": false, "canvas_disabled_functions": [ "Overlap" ] }, "Startup": { "total": 0.6985111236572266, "records": { "app reload callback": 0.0, "scripts unloaded callback": 0.0, "set samplers": 0.00010037422180175781, "list extensions": 0.0010409355163574219, "restore config state file": 0.0, "list SD models": 0.0019979476928710938, "list localizations": 0.0, "load scripts/custom_code.py": 0.006000518798828125, "load scripts/img2imgalt.py": 0.0014386177062988281, "load scripts/loopback.py": 0.001046895980834961, "load scripts/outpainting_mk_2.py": 0.0008502006530761719, "load scripts/poor_mans_outpainting.py": 0.001092672348022461, "load scripts/postprocessing_caption.py": 0.0, "load scripts/postprocessing_codeformer.py": 0.0009999275207519531, "load scripts/postprocessing_create_flipped_copies.py": 0.0010008811950683594, "load scripts/postprocessing_focal_crop.py": 0.0, "load scripts/postprocessing_gfpgan.py": 0.00118255615234375, "load scripts/postprocessing_split_oversized.py": 0.0, "load scripts/postprocessing_upscale.py": 0.0010259151458740234, "load scripts/processing_autosized_crop.py": 0.0009996891021728516, "load scripts/prompt_matrix.py": 0.0, "load scripts/prompts_from_file.py": 0.0010008811950683594, "load scripts/sd_upscale.py": 0.0, "load scripts/xyz_grid.py": 0.0019986629486083984, "load scripts/ldsr_model.py": 0.03842329978942871, "load scripts/lora_script.py": 0.10655760765075684, "load scripts/scunet_model.py": 0.019646167755126953, "load scripts/swinir_model.py": 0.01882171630859375, "load scripts/hotkey_config.py": 0.0009012222290039062, "load scripts/extra_options_section.py": 0.0006880760192871094, "load scripts/hypertile_script.py": 0.026415348052978516, "load scripts/hypertile_xyz.py": 0.0, "load scripts/refiner.py": 0.0, "load scripts/seed.py": 0.0, "load scripts": 0.23009085655212402, "reload script modules": 0.0627741813659668, "load upscalers": 0.0, "refresh VAE": 0.0, "refresh textual inversion templates": 0.0, "scripts list_optimizers": 0.0, "scripts list_unets": 0.0, "reload hypernetworks": 0.0, "initialize extra networks": 0.0, "scripts before_ui_callback": 0.0, "create ui": 0.3083672523498535, "gradio launch": 0.07850956916809082, "add APIs": 0.015630006790161133, "app_started_callback/lora_script.py": 0.0, "app_started_callback": 0.0 } }, 
"Packages": [ "absl-py==2.0.0", "accelerate==0.21.0", "addict==2.4.0", "aenum==3.1.15", "aiofiles==23.2.1", "aiohttp==3.9.1", "aiosignal==1.3.1", "altair==5.2.0", "antlr4-python3-runtime==4.9.3", "anyio==3.7.1", "async-timeout==4.0.3", "attrs==23.2.0", "basicsr==1.4.2", "beautifulsoup4==4.12.2", "blendmodes==2022", "certifi==2023.11.17", "charset-normalizer==3.3.2", "clean-fid==0.1.35", "click==8.1.7", "clip==1.0", "colorama==0.4.6", "contourpy==1.2.0", "cycler==0.12.1", "deprecation==2.1.0", "einops==0.4.1", "exceptiongroup==1.2.0", "facexlib==0.3.0", "fastapi==0.94.0", "ffmpy==0.3.1", "filelock==3.13.1", "filterpy==1.4.5", "fonttools==4.47.0", "frozenlist==1.4.1", "fsspec==2023.12.2", "ftfy==6.1.3", "future==0.18.3", "gdown==4.7.1", "gfpgan==1.3.8", "gitdb==4.0.11", "gitpython==3.1.32", "gradio-client==0.5.0", "gradio==3.41.2", "grpcio==1.60.0", "h11==0.12.0", "httpcore==0.15.0", "httpx==0.24.1", "huggingface-hub==0.20.2", "idna==3.6", "imageio==2.33.1", "importlib-metadata==7.0.1", "importlib-resources==6.1.1", "inflection==0.5.1", "jinja2==3.1.2", "jsonmerge==1.8.0", "jsonschema-specifications==2023.12.1", "jsonschema==4.20.0", "kiwisolver==1.4.5", "kornia==0.6.7", "lark==1.1.2", "lazy-loader==0.3", "lightning-utilities==0.10.0", "llvmlite==0.41.1", "lmdb==1.4.1", "lpips==0.1.4", "markdown==3.5.1", "markupsafe==2.1.3", "matplotlib==3.8.2", "mpmath==1.3.0", "multidict==6.0.4", "networkx==3.2.1", "numba==0.58.1", "numpy==1.23.5", "omegaconf==2.2.3", "open-clip-torch==2.20.0", "opencv-python==4.9.0.80", "orjson==3.9.10", "packaging==23.2", "pandas==2.1.4", "piexif==1.1.3", "pillow==9.5.0", "pip==23.3.2", "platformdirs==4.1.0", "protobuf==3.20.0", "psutil==5.9.5", "pydantic==1.10.13", "pydub==0.25.1", "pyparsing==3.1.1", "pysocks==1.7.1", "python-dateutil==2.8.2", "python-multipart==0.0.6", "pytorch-lightning==1.9.4", "pytz==2023.3.post1", "pywavelets==1.5.0", "pyyaml==6.0.1", "realesrgan==0.3.0", "referencing==0.32.1", "regex==2023.12.25", "requests==2.31.0", "resize-right==0.0.2", "rpds-py==0.16.2", "safetensors==0.3.1", "scikit-image==0.21.0", "scipy==1.11.4", "semantic-version==2.10.0", "sentencepiece==0.1.99", "setuptools==69.0.3", "six==1.16.0", "smmap==5.0.1", "sniffio==1.3.0", "soupsieve==2.5", "starlette==0.26.1", "sympy==1.12", "tb-nightly==2.16.0a20240109", "tensorboard-data-server==0.7.2", "tf-keras-nightly==2.16.0.dev2024010910", "tifffile==2023.12.9", "timm==0.9.2", "tokenizers==0.13.3", "tomesd==0.1.3", "tomli==2.0.1", "toolz==0.12.0", "torch==2.0.1+cu118", "torchdiffeq==0.2.3", "torchmetrics==1.2.1", "torchsde==0.2.6", "torchvision==0.15.2+cu118", "tqdm==4.66.1", "trampoline==0.1.2", "transformers==4.30.2", "typing-extensions==4.9.0", "tzdata==2023.4", "urllib3==2.1.0", "uvicorn==0.25.0", "wcwidth==0.2.13", "websockets==11.0.3", "werkzeug==3.0.1", "wheel==0.42.0", "yapf==0.40.2", "yarl==1.9.4", "zipp==3.17.0" ] }

Console logs

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.7.0
Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Style database not found: C:\Users\user\Downloads\sd.webui\webui\styles.csv
Loading weights [41b6846108] from C:\Users\user\Downloads\sd.webui\webui\models\Stable-diffusion\cyberrealistic_v41BackToBasics.safetensors
Running on local URL:  http://127.0.0.1:7860
Creating model from config: C:\Users\user\Downloads\sd.webui\webui\configs\v1-inference.yaml

To create a public link, set `share=True` in `launch()`.
Startup time: 18.3s (prepare environment: 4.7s, import torch: 4.6s, import gradio: 2.1s, setup paths: 2.4s, initialize shared: 0.2s, other imports: 1.7s, setup codeformer: 0.2s, load scripts: 1.3s, create ui: 0.6s, gradio launch: 0.4s).
Applying attention optimization: Doggettx... done.
Model loaded in 5.6s (load weights from disk: 0.8s, create model: 0.4s, apply weights to model: 1.4s, load textual inversion embeddings: 1.8s, calculate empty prompt: 1.1s).
100%|███████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:02<00:00,  7.71it/s]
Total progress: 100%|███████████████████████████████████████████████████████████████████████████████| 16/16 [00:01<00:00,  8.56it/s]
Total progress: 100%|███████████████████████████████████████████████████████████████████████████████| 16/16 [00:01<00:00,  8.95it/s]

Additional information

No response

acherry commented 8 months ago

Are you using img2img at the default denoising strength of 0.75? At values that high, most of the input image is re-noised and repainted, which would explain what you're seeing; setting it somewhere around 0.25 should preserve much more of your input image.
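
For a reproducible comparison, below is a minimal sketch of the same img2img call through the webui HTTP API, assuming it was launched with the --api flag on the default port; the file names are placeholders. Running it once with denoising_strength 0.75 and once with 0.25 on a fixed seed makes the effect obvious:

# Hedged sketch of the img2img call from this report, via the HTTP API.
# Assumes --api on the default port; "astronaut.png" is a placeholder input.
import base64
import requests

BASE = "http://127.0.0.1:7860"

with open("astronaut.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],   # the uploaded picture
    "prompt": "add flowers",
    "seed": 42,                    # fixed so only denoising_strength changes
    "steps": 16,
    # 0.75 is the UI default; that high, most of the input is replaced.
    # Around 0.25 keeps the original composition.
    "denoising_strength": 0.25,
}

r = requests.post(f"{BASE}/sdapi/v1/img2img", json=payload)
r.raise_for_status()

# The API returns generated images as base64-encoded strings.
with open("astronaut_flowers.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))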