hako-mikan / sd-webui-regional-prompter

set prompt to divided region
GNU Affero General Public License v3.0

Extension Causes Vladomatic (A1111 Fork) To Crash #145

Closed. fcabanski closed this issue 1 year ago.

fcabanski commented 1 year ago

With the extension installed, Vladomatic crashes: the browser can still connect to the UI, but no model will load and no images can be generated. When the extension is removed (by deleting the sd-webui-regional-prompter directory from the extensions folder), SD (Vladomatic) runs normally.
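For reference, a minimal sketch of that workaround in Python (the path is taken from the log below; adjust it for your own install, or rename the folder instead of deleting it if you want to restore the extension later):

# Workaround sketch only: remove the extension folder so the web UI starts without
# sd-webui-regional-prompter. Path comes from the log in this report.
import shutil

shutil.rmtree(r"C:\Users\Frank\Vlad\automatic\extensions\sd-webui-regional-prompter")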

Here is the error when the extension is installed:

Init / preset error. yaml False
load Sadtalker Checkpoints from C:\Users\Frank\Vlad\automatic\extensions\SadTalker\checkpoints
2023-06-07-22:40:47 image_browser.py: Calling script: C:\Users\Frank\Vlad\automatic\extensions\sd-webui-regional-prompter\scripts\rp.py/ui: AttributeError
┌───────────────── Traceback (most recent call last) ─────────────────┐
│ C:\Users\Frank\Vlad\automatic\modules\scripts.py:249 in wrap_call
│
│   248 │   try:
│ > 249 │   │   res = func(*args, **kwargs)
│   250 │   │   return res
│
│ C:\Users\Frank\Vlad\automatic\extensions\sd-webui-regional-prompter\scripts\rp.py:323 in ui
│
│   322 │   │   │   w, h = self.t2i_w, self.t2i_h
│ > 323 │   │   maketemp.click(fn=makeimgtmp, inputs =[ratios,mmode,usecom,usebase,w,h],outputs
│   324 │   │   applypresets.click(fn=setpreset, inputs = [availablepresets, settings], outputs
│
│ ... 1 frames hidden ...
│
│ C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\gradio\blocks.py:267 in set_event_trigger
│
│   266 │   │   │   "trigger": event_name,
│ > 267 │   │   │   "inputs": [block._id for block in inputs],
│   268 │   │   │   "outputs": [block._id for block in outputs],
│
│ C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\gradio\blocks.py:267 in <listcomp>
│
│   266 │   │   │   "trigger": event_name,
│ > 267 │   │   │   "inputs": [block._id for block in inputs],
│   268 │   │   │   "outputs": [block._id for block in outputs],
└──────────────────────────────────────────────────────────────────────┘
AttributeError: 'NoneType' object has no attribute '_id'
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch(). --- Logging error --- Traceback (most recent call last): File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging__init.py", line 1100, in emit msg = self.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging__init.py", line 943, in format return fmt.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging__init__.py", line 681, in format s = self.formatMessage(record) File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\uvicorn\logging.py", line 104, in formatMessage ( ValueError: not enough values to unpack (expected 5, got 0) Call stack: File "C:\Users\Frank\Vlad\automatic\launch.py", line 169, in instance = start_server(immediate=False, server=instance) File "C:\Users\Frank\Vlad\automatic\launch.py", line 133, in start_server server = server.webui() File "C:\Users\Frank\Vlad\automatic\webui.py", line 284, in webui start_ui() File "C:\Users\Frank\Vlad\automatic\webui.py", line 255, in start_ui shared.log.info(f'Local URL: {local_url}') Message: 'Local URL: http://127.0.0.1:7860/' Arguments: () --- Logging error --- Traceback (most recent call last): File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging__init.py", line 1100, in emit msg = self.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init.py", line 943, in format return fmt.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init.py", line 681, in format s = self.formatMessage(record) File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\uvicorn\logging.py", line 104, in formatMessage ( ValueError: not enough values to unpack (expected 5, got 0) Call stack: File "C:\Users\Frank\Vlad\automatic\launch.py", line 169, in instance = start_server(immediate=False, server=instance) File "C:\Users\Frank\Vlad\automatic\launch.py", line 133, in start_server server = server.webui() File "C:\Users\Frank\Vlad\automatic\webui.py", line 284, in webui start_ui() File "C:\Users\Frank\Vlad\automatic\webui.py", line 256, in start_ui shared.log.info(f'API Docs: {local_url[:-1]}/docs') # {local_url[:-1]}?view=api Message: 'API Docs: http://127.0.0.1:7860/docs' Arguments: () --- Logging error --- Traceback (most recent call last): File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init.py", line 1100, in emit msg = self.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init.py", line 943, in format return fmt.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init__.py", line 681, in format s = self.formatMessage(record) File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\uvicorn\logging.py", line 104, in formatMessage ( ValueError: not enough values to unpack (expected 5, got 0) Call stack: File "C:\Users\Frank\Vlad\automatic\launch.py", line 169, in instance = start_server(immediate=False, server=instance) File "C:\Users\Frank\Vlad\automatic\launch.py", line 133, in start_server server = server.webui() File "C:\Users\Frank\Vlad\automatic\webui.py", line 284, in webui start_ui() File "C:\Users\Frank\Vlad\automatic\webui.py", line 261, in start_ui setup_middleware(app, cmd_opts) File "C:\Users\Frank\Vlad\automatic\modules\middleware.py", line 21, in setup_middleware log.info('Initializing middleware') Message: 'Initializing middleware' Arguments: () --- Logging error --- Traceback (most recent call last): File 
"C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init.py", line 1100, in emit msg = self.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init.py", line 943, in format return fmt.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init__.py", line 681, in format s = self.formatMessage(record) File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\uvicorn\logging.py", line 104, in formatMessage ( ValueError: not enough values to unpack (expected 5, got 0) Call stack: File "C:\Users\Frank\Vlad\automatic\launch.py", line 169, in instance = start_server(immediate=False, server=instance) File "C:\Users\Frank\Vlad\automatic\launch.py", line 133, in start_server server = server.webui() File "C:\Users\Frank\Vlad\automatic\webui.py", line 284, in webui start_ui() File "C:\Users\Frank\Vlad\automatic\webui.py", line 273, in start_ui modules.script_callbacks.app_started_callback(shared.demo, app) File "C:\Users\Frank\Vlad\automatic\modules\script_callbacks.py", line 134, in app_started_callback c.callback(demo, app) File "C:\Users\Frank\Vlad\automatic\extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py", line 417, in on_app_started task_runner.execute_pending_tasks_threading() File "C:\Users\Frank\Vlad\automatic\extensions-builtin\sd-webui-agent-scheduler\agent_scheduler\task_runner.py", line 324, in execute_pending_tasks_threading pending_task = self.get_pending_task() File "C:\Users\Frank\Vlad\automatic\extensions-builtin\sd-webui-agent-scheduler\agent_scheduler\task_runner.py", line 438, in get_pending_task log.info("[AgentScheduler] Task queue is empty") Message: '[AgentScheduler] Task queue is empty' Arguments: () --- Logging error --- Traceback (most recent call last): File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging__init.py", line 1100, in emit msg = self.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging__init.py", line 943, in format return fmt.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging__init__.py", line 681, in format s = self.formatMessage(record) File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\uvicorn\logging.py", line 104, in formatMessage ( ValueError: not enough values to unpack (expected 5, got 0) Call stack: File "C:\Users\Frank\Vlad\automatic\launch.py", line 169, in instance = start_server(immediate=False, server=instance) File "C:\Users\Frank\Vlad\automatic\launch.py", line 133, in start_server server = server.webui() File "C:\Users\Frank\Vlad\automatic\webui.py", line 284, in webui start_ui() File "C:\Users\Frank\Vlad\automatic\webui.py", line 273, in start_ui modules.script_callbacks.app_started_callback(shared.demo, app) File "C:\Users\Frank\Vlad\automatic\modules\script_callbacks.py", line 134, in app_started_callback c.callback(demo, app) File "C:\Users\Frank\Vlad\automatic\extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py", line 418, in on_app_started regsiter_apis(app, task_runner) File "C:\Users\Frank\Vlad\automatic\extensions-builtin\sd-webui-agent-scheduler\agent_scheduler\api.py", line 22, in regsiter_apis log.info("[AgentScheduler] Registering APIs") Message: '[AgentScheduler] Registering APIs' Arguments: () --- Logging error --- Traceback (most recent call last): File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging__init.py", line 1100, in emit msg = self.format(record) File 
"C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init.py", line 943, in format return fmt.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init__.py", line 681, in format s = self.formatMessage(record) File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\uvicorn\logging.py", line 104, in formatMessage ( ValueError: not enough values to unpack (expected 5, got 0) Call stack: File "C:\Users\Frank\Vlad\automatic\launch.py", line 169, in instance = start_server(immediate=False, server=instance) File "C:\Users\Frank\Vlad\automatic\launch.py", line 133, in start_server server = server.webui() File "C:\Users\Frank\Vlad\automatic\webui.py", line 286, in webui log.info(f"Startup time: {startup_timer.summary()}") Message: 'Startup time: 6.3s (scripts=1.2s onchange=0.2s ui=3.7s launch=0.5s scripts app_started_callback=0.2s checkpoint=0.6s)' Arguments: () --- Logging error --- Traceback (most recent call last): Traceback (most recent call last): File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init.py", line 1100, in emit msg = self.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init__.py", line 943, in format return fmt.format(record) File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict output = await app.get_blocks().process_api( File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init__.py", line 681, in format s = self.formatMessage(record) File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\uvicorn\logging.py", line 104, in formatMessage ( File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\gradio\blocks.py", line 1326, in process_api data = self.postprocess_data(fn_index, result["prediction"], state) ValueError: not enough values to unpack (expected 5, got 0) File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\gradio\blocks.py", line 1229, in postprocess_data self.validate_outputs(fn_index, predictions) # type: ignore Call stack: File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\gradio\blocks.py", line 1204, in validate_outputs raise ValueError( ValueError: An event handler didn't receive enough output values (needed: 276, received: 2). 
Wanted outputs: [dropdown, slider, slider, dropdown, dropdown, checkbox, checkbox, checkbox, radio, checkboxgroup, slider, slider, slider, radio, checkbox, checkbox, checkbox, slider, radio, slider, radio, radio, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, radio, checkbox, checkbox, checkbox, textbox, checkbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, checkbox, dropdown, textbox, checkbox, checkbox, dropdown, checkbox, checkbox, checkbox, slider, checkbox, textbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, slider, checkbox, number, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, textbox, slider, checkbox, checkbox, colorpicker, slider, slider, slider, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, dropdown, radio, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, textbox, slider, slider, textbox, dropdown, dropdown, textbox, textbox, textbox, textbox, checkbox, checkbox, checkbox, checkbox, textbox, slider, radio, radio, number, checkboxgroup, dropdown, dropdown, slider, slider, radio, slider, slider, slider, slider, number, checkbox, radio, radio, slider, checkbox, dropdown, dropdown, slider, checkbox, checkbox, checkbox, checkbox, textbox, textbox, textbox, number, number, checkbox, checkbox, number, checkbox, checkbox, slider, slider, slider, number, checkboxgroup, slider, checkbox, checkbox, checkbox, textbox, dropdown, checkboxgroup, slider, slider, slider, slider, checkbox, checkbox, slider, checkbox, slider, slider, checkbox, checkbox, checkbox, radio, slider, checkbox, dropdown, slider, number, number, textbox, dropdown, dropdown, dropdown, radio, checkbox, checkbox, slider, checkbox, slider, checkbox, checkbox, checkbox, checkbox, radio, slider, slider, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, colorpicker, slider, checkbox, radio, checkbox, radio, textbox, textbox, textbox, textbox, textbox, slider, slider, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, textbox, dropdown, checkbox, checkbox, checkbox, checkbox, checkbox, dropdown, checkbox, checkbox, checkbox, checkbox, checkbox, number, number, number, checkbox, number, checkbox, checkbox, dropdown, checkbox, textbox, checkbox, number, checkbox, checkbox, checkbox, number, checkbox, checkbox] Received outputs: [{'value': None, 'choices': ['512-base-ema.ckpt [d635794c1f]', '512-depth-ema.ckpt [ece79d931a]', 'abyssorangerobuttsmi_x2.safetensors [37cfbd3981]', 'airoticartsPenis_10.ckpt [2efd512cc1]', 'analog-diffusion-1.0.ckpt [a9a1c90893]', 'anything-inpainting.ckpt [92cb44cc9a]', 'anythingelseV4_v45.safetensors [6e430eb514]', 'archerDiffusion_v1.ckpt [c9798ee288]', 'Basil_mixfixed.safetensors [0ff127093f]', 'cafe-instagram-unofficial-test-epoch-9-140k-images-fp32.ckpt [892cdbb138]', 'chilloutmix.safetensors [a757fe8b3d]', 'control_sd15_openpose.ckpt [d19ffffeea]', 'CounterfeitV25_25.safetensors [a074b8864e]', 'deliberate_v2.safetensors [9aba26abdf]', 'dreamlike-photoreal-2.0.ckpt [fc52756a74]', 'dreamlikeHassan_dreamlike2Hassans13.safetensors [9d883aeb70]', 'dreamshaper_6BakedVae.safetensors [b76cc78ad9]', 'dreamshaper_33BakedVae.safetensors [1dceefec07]', 'EldrethsLucidMix-Inpainting.inpainting.safetensors [69a73a57af]', 
'elldrethsLucidMix_v10.safetensors [67abd65708]', 'galaxytimemachinesGTM_v3(1).safetensors [f8ad2aafb5]', 'hassanblend1512And_hassanblend1512.safetensors [f05dd9e62f]', 'homoerotic_v2.safetensors [b656369cf7]', 'movieDiffusionV10_v10.ckpt [b32017a54b]', 'neverendingDreamNED_v122BakedVae.safetensors [ecefb796ff]', 'openjourney-v4.ckpt [02e37aad9f]', 'openjourney_v1.ckpt [5d5ad06cc2]', 'protogenV22Anime_22.safetensors [1254103966]', 'protogenX34Photorealism_1.safetensors [44f90a0972]', 'protogenX58RebuiltScifi_10.safetensors [6a21b428a3]', 'Realistic_Vision_V1.4-inpainting.ckpt [bc84c10c7e]', 'Realistic_Vision_V1.4.ckpt [660d4d07d6]', 'realisticVisionV20_v20.safetensors [c0d1994c73]', 'refdalorange_10.ckpt [c9ed4d7eda]', 'revAnimated_v11.safetensors [d725be5d18]', 'SD15NewVAEpruned.ckpt [27a4ac756c]', 'sd-v1-5-inpainting.ckpt [c6bbc15e32]', 'uberRealisticPornMerge_urpmv13.safetensors [f93e6a50ac]', 'unstablephotorealv5_05.ckpt [6ef4ed10a6]', 'v1-5-pruned-emaonly.safetensors [6ce0161689]', 'v1-5-pruned.ckpt [e1441589a6]', 'vintevivid_vintevivid.safetensors [a425fcac4f]'], 'type__': 'generic_update'}, "{"sd_model_checkpoint": null, "sd_checkpoint_cache": 0, "sd_vae_checkpoint_cache": 0, "sd_vae": "vae-ft-mse-840000-ema-pruned.ckpt", "sd_model_dict": "None", "sd_vae_sliced_encode": false, "stream_load": false, "model_reuse_dict": false, "cross_attention_optimization": "Scaled-Dot-Product", "cross_attention_options": [], "sub_quad_q_chunk_size": 512, "sub_quad_kv_chunk_size": 512, "sub_quad_chunk_threshold": 80, "prompt_attention": "Full parser", "prompt_mean_norm": true, "always_batch_cond_uncond": false, "enable_quantization": true, "comma_padding_backtrack": 20, "sd_backend": "Original", "memmon_poll_rate": 2, "precision": "Autocast", "cuda_dtype": "FP16", "no_half": false, "no_half_vae": false, "upcast_sampling": false, "upcast_attn": false, "disable_nan_check": true, "rollback_vae": false, "opt_channelslast": false, "cudnn_benchmark": false, "cuda_allow_tf32": true, "cuda_allow_tf16_reduced": true, "cuda_compile": false, "cuda_compile_mode": "none", "cuda_compile_verbose": false, "cuda_compile_errors": true, "disable_gc": false, "temp_dir": "", "clean_temp_dir_at_start": true, "ckpt_dir": "C:\Users\Frank\Vlad\automatic\models\Stable-diffusion", "diffusers_dir": "C:\Users\Frank\Vlad\automatic\models\Diffusers", "vae_dir": "C:\Users\Frank\Vlad\automatic\models\VAE", "lora_dir": "C:\Users\Frank\Vlad\automatic\models\Lora", "lyco_dir": "C:\Users\Frank\Vlad\automatic\models\LyCORIS", "styles_dir": "C:\Users\Frank\Vlad\automatic\styles.csv", "embeddings_dir": "C:\Users\Frank\Vlad\automatic\models\embeddings", "hypernetwork_dir": "C:\Users\Frank\Vlad\automatic\models\hypernetworks", "codeformer_models_path": "C:\Users\Frank\Vlad\automatic\models\Codeformer", "gfpgan_models_path": "C:\Users\Frank\Vlad\automatic\models\GFPGAN", "esrgan_models_path": "C:\Users\Frank\Vlad\automatic\models\ESRGAN", "bsrgan_models_path": "C:\Users\Frank\Vlad\automatic\models\BSRGAN", "realesrgan_models_path": "C:\Users\Frank\Vlad\automatic\models\RealESRGAN", "scunet_models_path": "C:\Users\Frank\Vlad\automatic\models\ScuNET", "swinir_models_path": "C:\Users\Frank\Vlad\automatic\models\SwinIR", "ldsr_models_path": "C:\Users\Frank\Vlad\automatic\models\LDSR", "clip_models_path": "C:\Users\Frank\Vlad\automatic\models\CLIP", "samples_save": true, "samples_format": "jpg", "samples_filename_pattern": "[seed]-[prompt_spaces]", "save_images_add_number": true, "grid_save": true, "grid_format": "jpg", "grid_extended_filename": 
true, "grid_only_if_multiple": true, "grid_prevent_empty_spots": true, "n_rows": -1, "save_txt": false, "save_log_fn": "", "save_images_before_face_restoration": false, "save_images_before_highres_fix": false, "save_images_before_color_correction": false, "save_mask": false, "save_mask_composite": false, "save_init_img": false, "jpeg_quality": 85, "webp_lossless": false, "img_max_size_mp": 250, "use_original_name_batch": true, "use_upscaler_name_as_suffix": true, "save_selected_only": true, "save_to_dirs": false, "grid_save_to_dirs": false, "use_save_to_dirs_for_ui": false, "directories_filename_pattern": "[date]", "directories_max_prompt_words": 8, "img2img_color_correction": false, "img2img_fix_steps": false, "img2img_background_color": "#ffffff", "inpainting_mask_weight": 1.0, "initial_noise_multiplier": 1.0, "CLIP_stop_at_last_layers": 1, "outdir_samples": "", "outdir_txt2img_samples": "C:\Users\Frank\Vlad\automatic\outputs/text", "outdir_img2img_samples": "C:\Users\Frank\Vlad\automatic\outputs/image", "outdir_extras_samples": "C:\Users\Frank\Vlad\automatic\outputs/extras", "outdir_grids": "", "outdir_txt2img_grids": "C:\Users\Frank\Vlad\automatic\outputs/grids", "outdir_img2img_grids": "C:\Users\Frank\Vlad\automatic\outputs/grids", "outdir_save": "C:\Users\Frank\Vlad\automatic\outputs/save", "outdir_init_images": "outputs/init-images", "gradio_theme": "black-orange", "theme_style": "Auto", "return_grid": true, "return_mask": false, "return_mask_composite": false, "disable_weights_auto_swap": true, "send_seed": true, "send_size": true, "font": "", "keyedit_precision_attention": 0.1, "keyedit_precision_extra": 0.05, "keyedit_delimiters": ".,\/!?%^;:{}=`~()", "quicksettings_list": ["sd_model_checkpoint"], "hidden_tabs": [], "ui_tab_reorder": "From Text, From Image, Process Image", "ui_scripts_reorder": "Enable Dynamic Thresholding, ControlNet", "ui_reorder": "inpaint, sampler, checkboxes, hires_fix, dimensions, cfg, seed, batch, override_settings, scripts", "ui_extra_networks_tab_reorder": "Checkpoints, Lora, LyCORIS, Textual Inversion, Hypernetworks", "show_progressbar": true, "live_previews_enable": false, "show_progress_grid": false, "notification_audio_enable": false, "notification_audio_path": "html/notification.mp3", "show_progress_every_n_steps": 1, "show_progress_type": "TAESD", "live_preview_content": "Combined", "live_preview_refresh_period": 250, "show_samplers": ["Euler a", "UniPC", "DDIM", "DPM++ 2M SDE", "DPM++ 2M SDE Karras", "DPM2 Karras", "DPM++ 2M Karras"], "fallback_sampler": "Euler a", "force_latent_sampler": "None", "eta_ancestral": 1.0, "eta_ddim": 0.0, "ddim_discretize": "uniform", "s_churn": 0.0, "s_min_uncond": 0, "s_tmin": 0.0, "s_noise": 1.0, "eta_noise_seed_delta": 0, "always_discard_next_to_last_sigma": false, "uni_pc_variant": "bh1", "uni_pc_skip_type": "time_uniform", "uni_pc_order": 3, "uni_pc_lower_order_final": true, "postprocessing_enable_in_main_ui": [], "postprocessing_operation_order": [], "upscaling_max_images_in_cache": 5, "unload_models_when_training": false, "pin_memory": true, "save_optimizer_state": false, "save_training_settings_to_txt": true, "dataset_filename_word_regex": "", "dataset_filename_join_string": " ", "embeddings_templates_dir": "C:\Users\Frank\Vlad\automatic\train\templates", "training_image_repeats_per_epoch": 1, "training_write_csv_every": 0, "training_enable_tensorboard": false, "training_tensorboard_save_images": false, "training_tensorboard_flush_every": 120, "interrogate_keep_models_in_memory": false, 
"interrogate_return_ranks": true, "interrogate_clip_num_beams": 1, "interrogate_clip_min_length": 32, "interrogate_clip_max_length": 192, "interrogate_clip_dict_limit": 2048, "interrogate_clip_skip_categories": ["artists", "movements", "flavors"], "interrogate_deepbooru_score_threshold": 0.65, "deepbooru_sort_alpha": false, "deepbooru_use_spaces": false, "deepbooru_escape": true, "deepbooru_filter_tags": "", "upscaler_for_img2img": "None", "realesrgan_enabled_models": ["R-ESRGAN 4x+", "R-ESRGAN 4x+ Anime6B"], "ESRGAN_tile": 192, "ESRGAN_tile_overlap": 8, "SCUNET_tile": 256, "SCUNET_tile_overlap": 8, "use_old_hires_fix_width_height": false, "dont_fix_second_order_samplers_schedule": false, "ldsr_steps": 100, "ldsr_cached": false, "SWIN_tile": 192, "SWIN_tile_overlap": 8, "lyco_patch_lora": false, "lora_disable": false, "lora_functional": false, "face_restoration_model": "CodeFormer", "code_former_weight": 0.2, "face_restoration_unload": false, "extra_networks_default_view": "cards", "extra_networks_default_multiplier": 1.0, "extra_networks_card_width": 0, "extra_networks_card_height": 0, "extra_networks_add_text_separator": " ", "sd_hypernetwork": "None", "sd_lyco": "None", "sd_lora": "None", "lora_preferred_name": "Alias from file", "lora_add_hashes_to_infotext": true, "token_merging": false, "token_merging_ratio": 0.5, "token_merging_hr_only": true, "token_merging_ratio_hr": 0.5, "token_merging_random": false, "token_merging_merge_attention": true, "token_merging_merge_cross_attention": false, "token_merging_merge_mlp": false, "token_merging_maximum_down_sampling": 1, "token_merging_stride_x": 2, "token_merging_stride_y": 2, "disabled_extensions": [], "disable_all_extensions": "none", "sd_checkpoint_hash": "d635794c1fedfdfa261e065370bea59c651fc9bfa65dc6d67ad29e11869a1824", "canvas_hotkey_move": "F", "canvas_hotkey_fullscreen": "S", "canvas_hotkey_reset": "R", "canvas_zoom_hotkey_open_colorpanel": "Q", "canvas_zoom_hotkey_pin_colorpanel": "T", "canvas_zoom_hotkey_dropper": "A", "canvas_zoom_hotkey_fill": "H", "canvas_zoom_hotkey_transparency": "C", "canvas_hotkey_overlap": "O", "canvas_show_tooltip": true, "canvas_swap_controls": false, "canvas_zoom_mask_clear": true, "canvas_zoom_brush_outline": false, "canvas_zoom_enable_integration": false, "canvas_zoom_add_buttons": false, "canvas_zoom_inpaint_brushcolor": "#000000", "canvas_zoom_transparency_level": 60, "queue_paused": false, "queue_button_placement": "Under Generate button", "queue_button_hide_checkpoint": true, "queue_history_retention_days": "30 days", "control_net_model_config": "models\cldm_v15.yaml", "control_net_model_adapter_config": "models\t2iadapter_sketch_sd14v1.yaml", "control_net_detectedmap_dir": "detected_maps", "control_net_models_path": "", "control_net_modules_path": "", "control_net_max_models_num": 3, "control_net_model_cache_size": 1, "control_net_no_detectmap": false, "control_net_detectmap_autosaving": false, "control_net_allow_script_control": false, "control_net_sync_field_args": false, "controlnet_show_batch_images_in_ui": false, "controlnet_increment_seed_during_batch": false, "controlnet_disable_control_type": false, "controlnet_disable_openpose_edit": false, "image_browser_active_tabs": "txt2img, img2img, txt2img-grids, img2img-grids, Extras, Favorites, Others, All, Maintenance", "image_browser_hidden_components": [], "image_browser_with_subdirs": true, "image_browser_preload": false, "image_browser_copy_image": false, "image_browser_delete_message": true, "image_browser_txt_files": true, 
"image_browser_debug_level": "0 - none", "image_browser_delete_recycle": true, "image_browser_scan_exif": true, "image_browser_mod_shift": false, "image_browser_mod_ctrl_shift": false, "image_browser_ranking_pnginfo": false, "image_browser_page_columns": 6.0, "image_browser_page_rows": 6.0, "image_browser_pages_perload": 20.0, "image_browser_use_thumbnail": false, "image_browser_thumbnail_size": 200.0, "image_browser_swipe": false, "image_browser_img_tooltips": true, "image_browser_scoring_type": "aesthetic_score", "image_browser_show_progress": true, "sadtalker_result_dir": "C:\Users\Frank\Vlad\automatic\outputs\SadTalker", "depthmap_script_keepmodels": false, "depthmap_script_boost_rmax": 1600, "depthmap_script_save_ply": false, "depthmap_script_show_3d": true, "depthmap_script_show_3d_inpaint": true, "depthmap_script_mesh_maxsize": 2048, "regprp_debug": false, "regprp_hidepmask": false}"] File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap self._bootstrap_inner() File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner self.run() File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run result = context.run(func, args) Traceback (most recent call last): File "C:\Users\Frank\Vlad\automatic\modules\sd_models.py", line 152, in update_model_hashes shared.log.info(f'Models list: short hash missing for {len(lst)} out of {len(checkpoints_list)} models') File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict output = await app.get_blocks().process_api( Message: 'Models list: short hash missing for 0 out of 42 models' Arguments: () File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api result = await self.call_function( --- Logging error --- File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function prediction = await anyio.to_thread.run_sync( Traceback (most recent call last): File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync return await get_asynclib().run_sync_in_worker_thread( File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging__init.py", line 1100, in emit msg = self.format(record) File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread return await future File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run result = context.run(func, args) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging__init__.py", line 943, in format return fmt.format(record) File "C:\Users\Frank\Vlad\automatic\extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py", line 164, in f raise Exception("Invalid call") Exception: Invalid call File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging__init__.py", line 681, in format s = self.formatMessage(record) File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\uvicorn\logging.py", line 104, in formatMessage ( ValueError: not enough values to unpack (expected 5, got 0) Call stack: File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap self._bootstrap_inner() File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner self.run() File 
"C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run result = context.run(func, args) File "C:\Users\Frank\Vlad\automatic\modules\sd_models.py", line 158, in update_model_hashes shared.log.info(f'Models list: full hash missing for {len(lst)} out of {len(checkpoints_list)} models') Message: 'Models list: full hash missing for 0 out of 42 models' Arguments: () Traceback (most recent call last): File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict output = await app.get_blocks().process_api( File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api result = await self.call_function( File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function prediction = await anyio.to_thread.run_sync( File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync return await get_asynclib().run_sync_in_worker_thread( File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread return await future File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run result = context.run(func, args) File "C:\Users\Frank\Vlad\automatic\extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py", line 164, in f raise Exception("Invalid call") Exception: Invalid call --- Logging error --- Traceback (most recent call last): File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging__init.py", line 1100, in emit msg = self.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init.py", line 943, in format return fmt.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init__.py", line 681, in format s = self.formatMessage(record) File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\uvicorn\logging.py", line 104, in formatMessage ( ValueError: not enough values to unpack (expected 5, got 0) Call stack: File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap self._bootstrap_inner() File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner self.run() File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run result = context.run(func, args) File "C:\Users\Frank\Vlad\automatic\modules\ui.py", line 1458, in fn=lambda value, k=k: run_settings_single(value, key=k), File "C:\Users\Frank\Vlad\automatic\modules\ui.py", line 1316, in run_settings_single if not opts.set(key, value): File "C:\Users\Frank\Vlad\automatic\modules\shared.py", line 593, in set self.data_labels[key].onchange() File "C:\Users\Frank\Vlad\automatic\modules\call_queue.py", line 16, in f res = func(*args, **kwargs) File "C:\Users\Frank\Vlad\automatic\webui.py", line 160, in shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()), call=False) File "C:\Users\Frank\Vlad\automatic\modules\sd_models.py", line 565, in reload_model_weights checkpoint_info = info or select_checkpoint(model=not load_dict) # are we selecting model or dictionary File "C:\Users\Frank\Vlad\automatic\modules\sd_models.py", line 205, in select_checkpoint shared.log.warning(f"Selected checkpoint not found: {model_checkpoint}") Message: 'Selected checkpoint not found: 
sd_model_checkpoint' Arguments: () --- Logging error --- Traceback (most recent call last): File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init.py", line 1100, in emit msg = self.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init.py", line 943, in format return fmt.format(record) File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\logging\init__.py", line 681, in format s = self.formatMessage(record) File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\uvicorn\logging.py", line 104, in formatMessage ( ValueError: not enough values to unpack (expected 5, got 0) Call stack: File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap self._bootstrap_inner() File "C:\Users\Frank\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner self.run() File "C:\Users\Frank\Vlad\automatic\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run result = context.run(func, args) File "C:\Users\Frank\Vlad\automatic\modules\ui.py", line 1458, in fn=lambda value, k=k: run_settings_single(value, key=k), File "C:\Users\Frank\Vlad\automatic\modules\ui.py", line 1316, in run_settings_single if not opts.set(key, value): File "C:\Users\Frank\Vlad\automatic\modules\shared.py", line 593, in set self.data_labels[key].onchange() File "C:\Users\Frank\Vlad\automatic\modules\call_queue.py", line 16, in f res = func(args, **kwargs) File "C:\Users\Frank\Vlad\automatic\webui.py", line 160, in shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()), call=False) File "C:\Users\Frank\Vlad\automatic\modules\sd_models.py", line 565, in reload_model_weights checkpoint_info = info or select_checkpoint(model=not load_dict) # are we selecting model or dictionary File "C:\Users\Frank\Vlad\automatic\modules\sd_models.py", line 206, in select_checkpoint shared.log.warning(f"Loading fallback checkpoint: {checkpoint_info.title}") Message: 'Loading fallback checkpoint: 512-base-ema.ckpt [d635794c1f]'
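For anyone skimming the log above: the relevant failure is the AttributeError: 'NoneType' object has no attribute '_id' raised while rp.py builds its UI, which means one of the component references passed to maketemp.click() was None (in this trace, likely w or h from self.t2i_w / self.t2i_h). Everything after that is downstream noise from the broken startup. A minimal, self-contained Gradio sketch (not the extension's actual code) that reproduces the same error with the Gradio 3.x line shown in the trace:

import gradio as gr

def echo(a, b):
    return a

with gr.Blocks() as demo:
    box = gr.Textbox()
    out = gr.Textbox()
    btn = gr.Button("Run")
    missing = None  # stands in for a UI component that was never created (e.g. the w/h fields here)
    # Gradio's set_event_trigger iterates `block._id for block in inputs`, so a None entry
    # raises: AttributeError: 'NoneType' object has no attribute '_id'
    btn.click(fn=echo, inputs=[box, missing], outputs=[out])

Because the error fires while the Blocks graph is being constructed, the whole app fails to come up cleanly rather than just one button misbehaving, which matches the "UI connects but nothing works" symptom.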

Symbiomatrix commented 1 year ago

@fcabanski Could be related to #143. Can you please install this version and test?

Edit: I've reverted the PR, so the latest version should be back to the previous behavior.
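In case it's useful, one way to pin the extension to a specific revision for testing (a sketch only; the repository path is taken from this report, and the revision argument is whatever commit or branch is being tested):

import subprocess
from pathlib import Path

# Extension checkout inside the Vladomatic install (path from this report).
EXT_DIR = Path(r"C:\Users\Frank\Vlad\automatic\extensions\sd-webui-regional-prompter")

def checkout_revision(rev: str) -> None:
    """Fetch and switch the extension to the given commit/branch; restart the web UI afterwards."""
    subprocess.run(["git", "fetch", "--all"], cwd=EXT_DIR, check=True)
    subprocess.run(["git", "checkout", rev], cwd=EXT_DIR, check=True)

# Example: checkout_revision("main") picks up the revert mentioned above;
# passing a specific commit hash tests the linked version instead.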

fcabanski commented 1 year ago

"@fcabanski Could be related to https://github.com/hako-mikan/sd-webui-regional-prompter/pull/143 . Can you please install this version and test?"

That has fixed the issue.

Symbiomatrix commented 1 year ago

Good, closing.