AUTOMATIC1111 / stable-diffusion-webui


[Bug]: Preprocess Images Broken #13340

Open captainzero93 opened 1 year ago

captainzero93 commented 1 year ago

Is there an existing issue for this?

What happened?

Running "Preprocess images" with "Auto focal point crop" enabled fails on the very first image; every request aborts with:

    cv2.error: OpenCV(4.8.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\net_impl.cpp:279: error: (-204:Requested object was not found) Layer with requested id=-1 not found in function 'cv::dnn::dnn4_v20230620::Net::Impl::getLayerData'

The full traceback is reproduced in the Console logs section below.
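The failure can be reproduced outside the UI with just OpenCV's YuNet face detector, which is what autocrop.py is calling at the point of the crash. A minimal sketch, assuming the cached model file that a v1.6.0 install keeps at models/opencv/face_detection.onnx (path and filename are an assumption, not confirmed by this report):

    # Minimal repro sketch of the call chain in autocrop.image_face_points.
    # ASSUMPTION: webui v1.6.0 caches the YuNet model at models/opencv/face_detection.onnx.
    import cv2
    import numpy as np
    from PIL import Image

    model_path = r"B:\ASSD16\stable-diffusion-webui\models\opencv\face_detection.onnx"
    im = Image.open("any_training_image.png").convert("RGB")

    detector = cv2.FaceDetectorYN.create(model_path, "", (im.width, im.height))
    faces = detector.detect(np.array(im))  # raises the cv2.error above under OpenCV 4.8 with the 2022mar model

If this snippet raises the same "Layer with requested id=-1" error, the problem is the model/OpenCV pairing rather than the webui itself.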

Steps to reproduce the problem

  1. Go to the Train tab
  2. Open "Preprocess images"
  3. Enable "Auto focal point crop"
  4. Click Preprocess

What should have happened?

It should have processed the images and put them in the destination folder.

Sysinfo

{ "Platform": "Windows-10-10.0.22621-SP0", "Python": "3.10.6", "Version": "v1.6.0", "Commit": "5ef669de080814067961f28357256e8fe27544f4", "Script path": "B:\ASSD16\stable-diffusion-webui", "Data path": "B:\ASSD16\stable-diffusion-webui", "Extensions dir": "B:\ASSD16\stable-diffusion-webui\extensions", "Checksum": "e8692e6fb9c925d1b3107bb69f4c63b32ebb6b87ec8e30e2113e35aa67f81ba0", "Commandline": [ "launch.py", "--medvram", "--opt-split-attention", "--xformers" ], "Torch env info": { "torch_version": "2.0.1+cu118", "is_debug_build": "False", "cuda_compiled_version": "11.8", "gcc_version": null, "clang_version": null, "cmake_version": null, "os": "Microsoft Windows 11 Home", "libc_version": "N/A", "python_version": "3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] (64-bit runtime)", "python_platform": "Windows-10-10.0.22621-SP0", "is_cuda_available": "True", "cuda_runtime_version": null, "cuda_module_loading": "LAZY", "nvidia_driver_version": "537.34", "nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 3060", "cudnn_version": null, "pip_version": "pip3", "pip_packages": [ "numpy==1.23.5", "open-clip-torch==2.20.0", "pytorch-lightning==1.9.4", "torch==2.0.1+cu118", "torchdiffeq==0.2.3", "torchmetrics==1.1.1", "torchsde==0.2.5", "torchvision==0.15.2+cu118" ], "conda_packages": null, "hip_compiled_version": "N/A", "hip_runtime_version": "N/A", "miopen_runtime_version": "N/A", "caching_allocator_config": "", "is_xnnpack_available": "True", "cpu_info": [ "Architecture=9", "CurrentClockSpeed=3401", "DeviceID=CPU0", "Family=107", "L2CacheSize=4096", "L2CacheSpeed=", "Manufacturer=AuthenticAMD", "MaxClockSpeed=3401", "Name=AMD Ryzen 7 5700X 8-Core Processor ", "ProcessorType=3", "Revision=8450" ] }, "Exceptions": [ { "exception": "OpenCV(4.8.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\net_impl.cpp:279: error: (-204:Requested object was not found) Layer with requested id=-1 not found in function 'cv::dnn::dnn4_v20230620::Net::Impl::getLayerData'\n", "traceback": [ [ "B:\ASSD16\stable-diffusion-webui\modules\call_queue.py, line 57, f", "res = list(func(*args, *kwargs))" ], [ "B:\ASSD16\stable-diffusion-webui\modules\call_queue.py, line 36, f", "res = func(args, *kwargs)" ], [ "B:\ASSD16\stable-diffusion-webui\modules\textual_inversion\ui.py, line 19, preprocess", "modules.textual_inversion.preprocess.preprocess(args)" ], [ "B:\ASSD16\stable-diffusion-webui\modules\textual_inversion\preprocess.py, line 18, preprocess", "preprocess_work(process_src, process_dst, process_width, process_height, preprocess_txt_action, process_keep_original_size, process_flip, process_split, process_caption, process_caption_deepbooru, split_threshold, overlap_ratio, process_focal_crop, process_focal_crop_face_weight, process_focal_crop_entropy_weight, process_focal_crop_edges_weight, process_focal_crop_debug, process_multicrop, process_multicrop_mindim, process_multicrop_maxdim, process_multicrop_minarea, process_multicrop_maxarea, process_multicrop_objective, process_multicrop_threshold)" ], [ "B:\ASSD16\stable-diffusion-webui\modules\textual_inversion\preprocess.py, line 212, preprocess_work", "for focal in autocrop.crop_image(img, autocrop_settings):" ], [ "B:\ASSD16\stable-diffusion-webui\modules\textual_inversion\autocrop.py, line 32, crop_image", "focus = focal_point(im_debug, settings)" ], [ "B:\ASSD16\stable-diffusion-webui\modules\textual_inversion\autocrop.py, line 75, focal_point", "face_points = image_face_points(im, settings) if settings.face_points_weight > 0 else []" 
], [ "B:\ASSD16\stable-diffusion-webui\modules\textual_inversion\autocrop.py, line 150, image_face_points", "faces = detector.detect(np.array(im))" ] ] } ], "CPU": { "model": "AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD", "count logical": 16, "count physical": 8 }, "RAM": { "total": "32GB", "used": "15GB", "free": "17GB" }, "Extensions": [ { "name": "adetailer", "path": "B:\ASSD16\stable-diffusion-webui\extensions\adetailer", "version": "910bf3b9", "branch": "main", "remote": "https://github.com/Bing-su/adetailer.git" }, { "name": "sd-dynamic-prompts", "path": "B:\ASSD16\stable-diffusion-webui\extensions\sd-dynamic-prompts", "version": "39c06b30", "branch": "main", "remote": "https://github.com/adieyal/sd-dynamic-prompts.git" }, { "name": "stable-diffusion-webui-wd14-tagger", "path": "B:\ASSD16\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger", "version": "56086928", "branch": "master", "remote": "https://github.com/picobyte/stable-diffusion-webui-wd14-tagger.git" }, { "name": "stable-diffusion-webui-wildcards", "path": "B:\ASSD16\stable-diffusion-webui\extensions\stable-diffusion-webui-wildcards", "version": "c7d49e18", "branch": "master", "remote": "https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards" } ], "Inactive extensions": [ { "name": "sd_smartprocess", "path": "B:\ASSD16\stable-diffusion-webui\extensions\sd_smartprocess", "version": "41fb35ef", "branch": "main", "remote": "https://github.com/d8ahazard/sd_smartprocess" } ], "Environment": { "COMMANDLINE_ARGS": " --medvram --opt-split-attention --xformers", "GRADIO_ANALYTICS_ENABLED": "False" }, "Config": { "samples_save": true, "samples_format": "png", "samples_filename_pattern": "", "save_images_add_number": true, "grid_save": true, "grid_format": "png", "grid_extended_filename": false, "grid_only_if_multiple": true, "grid_prevent_empty_spots": false, "grid_zip_filename_pattern": "", "n_rows": -1, "font": "", "grid_text_active_color": "#000000", "grid_text_inactive_color": "#999999", "grid_background_color": "#ffffff", "enable_pnginfo": true, "save_txt": false, "save_images_before_face_restoration": false, "save_images_before_highres_fix": false, "save_images_before_color_correction": false, "save_mask": false, "save_mask_composite": false, "jpeg_quality": 80, "webp_lossless": false, "export_for_4chan": true, "img_downscale_threshold": 4.0, "target_side_length": 4000, "img_max_size_mp": 200, "use_original_name_batch": true, "use_upscaler_name_as_suffix": false, "save_selected_only": true, "save_init_img": false, "temp_dir": "", "clean_temp_dir_at_start": false, "save_incomplete_images": false, "outdir_samples": "", "outdir_txt2img_samples": "outputs/txt2img-images", "outdir_img2img_samples": "outputs/img2img-images", "outdir_extras_samples": "outputs/extras-images", "outdir_grids": "", "outdir_txt2img_grids": "outputs/txt2img-grids", "outdir_img2img_grids": "outputs/img2img-grids", "outdir_save": "log/images", "outdir_init_images": "outputs/init-images", "save_to_dirs": true, "grid_save_to_dirs": true, "use_save_to_dirs_for_ui": false, "directories_filename_pattern": "[date]", "directories_max_prompt_words": 8, "ESRGAN_tile": 192, "ESRGAN_tile_overlap": 8, "realesrgan_enabled_models": [ "R-ESRGAN 4x+", "R-ESRGAN 4x+ Anime6B" ], "upscaler_for_img2img": null, "face_restoration": false, "face_restoration_model": "CodeFormer", "code_former_weight": 0.5, "face_restoration_unload": false, "auto_launch_browser": "Local", "show_warnings": false, "show_gradio_deprecation_warnings": true, 
"memmon_poll_rate": 8, "samples_log_stdout": false, "multiple_tqdm": false, "print_hypernet_extra": false, "list_hidden_files": true, "disable_mmap_load_safetensors": false, "hide_ldm_prints": true, "api_enable_requests": true, "api_forbid_local_requests": true, "api_useragent": "", "unload_models_when_training": false, "pin_memory": false, "save_optimizer_state": false, "save_training_settings_to_txt": true, "dataset_filename_word_regex": "", "dataset_filename_join_string": " ", "training_image_repeats_per_epoch": 1, "training_write_csv_every": 500, "training_xattention_optimizations": false, "training_enable_tensorboard": false, "training_tensorboard_save_images": false, "training_tensorboard_flush_every": 120, "sd_model_checkpoint": "NuclearAnimeSDXLDiffusion.safetensors [e8272d6146]", "sd_checkpoints_limit": 1, "sd_checkpoints_keep_in_cpu": true, "sd_checkpoint_cache": 0, "sd_unet": "Automatic", "enable_quantization": false, "enable_emphasis": true, "enable_batch_seeds": true, "comma_padding_backtrack": 20, "CLIP_stop_at_last_layers": 1, "upcast_attn": false, "randn_source": "GPU", "tiling": false, "hires_fix_refiner_pass": "second pass", "sdxl_crop_top": 0, "sdxl_crop_left": 0, "sdxl_refiner_low_aesthetic_score": 2.5, "sdxl_refiner_high_aesthetic_score": 6.0, "sd_vae_explanation": "VAE is a neural network that transforms a standard RGB\nimage into latent space representation and back. Latent space representation is what stable diffusion is working on during sampling\n(i.e. when the progress bar is between empty and full). For txt2img, VAE is used to create a resulting image after the sampling is finished.\nFor img2img, VAE is used to process user's input image before the sampling, and to create an image after sampling.", "sd_vae_checkpoint_cache": 0, "sd_vae": "sdxl_vae.safetensors", "sd_vae_overrides_per_model_preferences": true, "auto_vae_precision": true, "sd_vae_encode_method": "Full", "sd_vae_decode_method": "Full", "inpainting_mask_weight": 1.0, "initial_noise_multiplier": 1.0, "img2img_extra_noise": 0.0, "img2img_color_correction": false, "img2img_fix_steps": false, "img2img_background_color": "#ffffff", "img2img_editor_height": 720, "img2img_sketch_default_brush_color": "#ffffff", "img2img_inpaint_mask_brush_color": "#ffffff", "img2img_inpaint_sketch_default_brush_color": "#ffffff", "return_mask": false, "return_mask_composite": false, "cross_attention_optimization": "Automatic", "s_min_uncond": 0.0, "token_merging_ratio": 0.0, "token_merging_ratio_img2img": 0.0, "token_merging_ratio_hr": 0.0, "pad_cond_uncond": false, "persistent_cond_cache": true, "batch_cond_uncond": true, "use_old_emphasis_implementation": false, "use_old_karras_scheduler_sigmas": false, "no_dpmpp_sde_batch_determinism": false, "use_old_hires_fix_width_height": false, "dont_fix_second_order_samplers_schedule": false, "hires_fix_use_firstpass_conds": false, "use_old_scheduling": false, "interrogate_keep_models_in_memory": false, "interrogate_return_ranks": false, "interrogate_clip_num_beams": 1, "interrogate_clip_min_length": 24, "interrogate_clip_max_length": 48, "interrogate_clip_dict_limit": 1500, "interrogate_clip_skip_categories": [], "interrogate_deepbooru_score_threshold": 0.5, "deepbooru_sort_alpha": true, "deepbooru_use_spaces": true, "deepbooru_escape": true, "deepbooru_filter_tags": "", "extra_networks_show_hidden_directories": true, "extra_networks_hidden_models": "When searched", "extra_networks_default_multiplier": 1.0, "extra_networks_card_width": 0, "extra_networks_card_height": 0, 
"extra_networks_card_text_scale": 1.0, "extra_networks_card_show_desc": true, "extra_networks_add_text_separator": " ", "ui_extra_networks_tab_reorder": "", "textual_inversion_print_at_load": false, "textual_inversion_add_hashes_to_infotext": true, "sd_hypernetwork": "None", "localization": "None", "gradio_theme": "Default", "gradio_themes_cache": true, "gallery_height": "", "return_grid": true, "do_not_show_images": false, "send_seed": true, "send_size": true, "js_modal_lightbox": true, "js_modal_lightbox_initially_zoomed": true, "js_modal_lightbox_gamepad": false, "js_modal_lightbox_gamepad_repeat": 250, "show_progress_in_title": true, "samplers_in_dropdown": true, "dimensions_and_batch_together": true, "keyedit_precision_attention": 0.1, "keyedit_precision_extra": 0.05, "keyedit_delimiters": ".,\/!?%^*;:{}=`~()", "keyedit_move": true, "quicksettings_list": [ "sd_model_checkpoint" ], "ui_tab_order": [], "hidden_tabs": [], "ui_reorder_list": [], "hires_fix_show_sampler": false, "hires_fix_show_prompts": false, "disable_token_counters": false, "add_model_hash_to_info": true, "add_model_name_to_info": true, "add_user_name_to_info": false, "add_version_to_infotext": true, "disable_weights_auto_swap": true, "infotext_styles": "Apply if any", "show_progressbar": true, "live_previews_enable": true, "live_previews_image_format": "png", "show_progress_grid": true, "show_progress_every_n_steps": 10, "show_progress_type": "Approx NN", "live_preview_allow_lowvram_full": false, "live_preview_content": "Prompt", "live_preview_refresh_period": 1000, "live_preview_fast_interrupt": false, "hide_samplers": [], "eta_ddim": 0.0, "eta_ancestral": 1.0, "ddim_discretize": "uniform", "s_churn": 0.0, "s_tmin": 0.0, "s_tmax": 0.0, "s_noise": 1.0, "k_sched_type": "Automatic", "sigma_min": 0.0, "sigma_max": 0.0, "rho": 0.0, "eta_noise_seed_delta": 0, "always_discard_next_to_last_sigma": false, "sgm_noise_multiplier": false, "uni_pc_variant": "bh1", "uni_pc_skip_type": "time_uniform", "uni_pc_order": 3, "uni_pc_lower_order_final": true, "postprocessing_enable_in_main_ui": [], "postprocessing_operation_order": [], "upscaling_max_images_in_cache": 5, "disabled_extensions": [ "sd_smartprocess" ], "disable_all_extensions": "none", "restore_config_state_file": "", "sd_checkpoint_hash": "e8272d614663bea8befa7c2069f6f72bec8f9461e5b5bf63fee08f63723d3713", "ldsr_steps": 100, "ldsr_cached": false, "SCUNET_tile": 256, "SCUNET_tile_overlap": 8, "SWIN_tile": 192, "SWIN_tile_overlap": 8, "lora_functional": false, "sd_lora": "None", "lora_preferred_name": "Alias from file", "lora_add_hashes_to_infotext": true, "lora_show_all": false, "lora_hide_unknown_for_versions": [], "lora_in_memory_limit": 0, "extra_options_txt2img": [], "extra_options_img2img": [], "extra_options_cols": 1, "extra_options_accordion": false, "canvas_hotkey_zoom": "Alt", "canvas_hotkey_adjust": "Ctrl", "canvas_hotkey_move": "F", "canvas_hotkey_fullscreen": "S", "canvas_hotkey_reset": "R", "canvas_hotkey_overlap": "O", "canvas_show_tooltip": true, "canvas_auto_expand": true, "canvas_blur_prompt": false, "canvas_disabled_functions": [ "Overlap" ] }, "Startup": { "total": 15.437963724136353, "records": { "initial startup": 0.0, "prepare environment/checks": 0.021569490432739258, "prepare environment/git version info": 0.07708621025085449, "prepare environment/torch GPU test": 1.7982056140899658, "prepare environment/clone repositores": 0.25705599784851074, "prepare environment/run extensions installers/adetailer": 0.14883089065551758, "prepare environment/run 
extensions installers/sd-dynamic-prompts": 0.1687178611755371, "prepare environment/run extensions installers/stable-diffusion-webui-wd14-tagger": 2.252871036529541, "prepare environment/run extensions installers/stable-diffusion-webui-wildcards": 0.0, "prepare environment/run extensions installers": 2.5704197883605957, "prepare environment": 4.777605056762695, "launcher": 0.0019996166229248047, "import torch": 4.890209674835205, "import gradio": 0.6972582340240479, "setup paths": 0.6421585083007812, "import ldm": 0.005000591278076172, "import sgm": 0.0, "initialize shared": 0.21243691444396973, "other imports": 0.42130517959594727, "opts onchange": 0.0, "setup SD model": 0.0020093917846679688, "setup codeformer": 0.09157180786132812, "setup gfpgan": 0.017042160034179688, "set samplers": 0.0, "list extensions": 0.0010099411010742188, "restore config state file": 0.0, "list SD models": 0.002999544143676758, "list localizations": 0.0009999275207519531, "load scripts/custom_code.py": 0.002513408660888672, "load scripts/img2imgalt.py": 0.0, "load scripts/loopback.py": 0.0009996891021728516, "load scripts/outpainting_mk_2.py": 0.0, "load scripts/poor_mans_outpainting.py": 0.0, "load scripts/postprocessing_codeformer.py": 0.0, "load scripts/postprocessing_gfpgan.py": 0.0009996891021728516, "load scripts/postprocessing_upscale.py": 0.0, "load scripts/prompt_matrix.py": 0.0, "load scripts/prompts_from_file.py": 0.0, "load scripts/refiner.py": 0.0010004043579101562, "load scripts/sd_upscale.py": 0.0, "load scripts/seed.py": 0.0, "load scripts/xyz_grid.py": 0.0010569095611572266, "load scripts/!adetailer.py": 2.5371451377868652, "load scripts/dynamic_prompting.py": 0.02958846092224121, "load scripts/tagger.py": 0.12101340293884277, "load scripts/wildcards.py": 0.017557382583618164, "load scripts/ldsr_model.py": 0.020200014114379883, "load scripts/lora_script.py": 0.11548686027526855, "load scripts/scunet_model.py": 0.021061182022094727, "load scripts/swinir_model.py": 0.019547224044799805, "load scripts/hotkey_config.py": 0.0, "load scripts/extra_options_section.py": 0.0, "load scripts": 2.888169765472412, "load upscalers": 0.003000020980834961, "refresh VAE": 0.0, "refresh textual inversion templates": 0.0, "scripts list_optimizers": 0.0020241737365722656, "scripts list_unets": 0.0, "reload hypernetworks": 0.005010128021240234, "initialize extra networks": 0.017069101333618164, "scripts before_ui_callback": 0.0019998550415039062, "create ui": 0.6132962703704834, "gradio launch": 0.18902254104614258, "add APIs": 0.006023406982421875, "app_started_callback/tagger.py": 0.002009868621826172, "app_started_callback/lora_script.py": 0.0, "app_started_callback": 0.002009868621826172 } }, "Packages": [ "-rotobuf==3.20.0", "absl-py==1.4.0", "accelerate==0.21.0", "addict==2.4.0", "aenum==3.1.15", "aiofiles==23.2.1", "aiohttp==3.8.5", "aiosignal==1.3.1", "altair==5.1.1", "antlr4-python3-runtime==4.9.3", "anyio==3.7.1", "asttokens==2.4.0", "astunparse==1.6.3", "async-timeout==4.0.3", "attrs==23.1.0", "backcall==0.2.0", "basicsr==1.4.2", "beautifulsoup4==4.12.2", "blendmodes==2022", "boltons==23.0.0", "cachetools==5.3.1", "certifi==2023.7.22", "cffi==1.15.1", "charset-normalizer==3.2.0", "clean-fid==0.1.35", "click==8.1.7", "clip==1.0", "colorama==0.4.6", "coloredlogs==15.0.1", "contourpy==1.1.0", "cycler==0.11.0", "decorator==5.1.1", "deepdanbooru==1.0.2", "deprecation==2.1.0", "dynamicprompts==0.29.0", "einops==0.4.1", "exceptiongroup==1.1.3", "executing==1.2.0", "facexlib==0.3.0", "fastapi==0.94.0", 
"ffmpy==0.3.1", "filelock==3.12.2", "filterpy==1.4.5", "flatbuffers==23.5.26", "fonttools==4.42.1", "frozenlist==1.4.0", "fsspec==2023.9.0", "ftfy==6.1.1", "future==0.18.3", "gast==0.4.0", "gdown==4.7.1", "gfpgan==1.3.8", "gitdb==4.0.10", "gitpython==3.1.32", "google-auth-oauthlib==1.0.0", "google-auth==2.22.0", "google-pasta==0.2.0", "gradio-client==0.5.0", "gradio==3.41.2", "grpcio==1.58.0", "h11==0.12.0", "h5py==3.9.0", "httpcore==0.15.0", "httpx==0.24.1", "huggingface-hub==0.16.4", "humanfriendly==10.0", "idna==3.4", "imageio==2.31.3", "importlib-metadata==6.8.0", "importlib-resources==6.0.1", "inflection==0.5.1", "ipython==8.6.0", "jedi==0.19.0", "jinja2==3.1.2", "jsonmerge==1.8.0", "jsonschema-specifications==2023.7.1", "jsonschema==4.19.0", "keras==2.13.1", "kiwisolver==1.4.5", "kornia==0.6.7", "lark==1.1.2", "lazy-loader==0.3", "libclang==16.0.6", "lightning-utilities==0.9.0", "llvmlite==0.40.1", "lmdb==1.4.1", "lpips==0.1.4", "markdown-it-py==3.0.0", "markdown==3.4.4", "markupsafe==2.1.3", "matplotlib-inline==0.1.6", "matplotlib==3.7.2", "mdurl==0.1.2", "mediapipe==0.10.5", "mpmath==1.3.0", "multidict==6.0.4", "networkx==3.1", "numba==0.57.1", "numpy==1.23.5", "oauthlib==3.2.2", "omegaconf==2.2.3", "onnxruntime-gpu==1.15.1", "open-clip-torch==2.20.0", "opencv-contrib-python==4.8.0.76", "opencv-python-headless==4.8.0.76", "opencv-python==4.8.0.76", "opt-einsum==3.3.0", "orjson==3.9.7", "packaging==23.1", "pandas==2.1.0", "parso==0.8.3", "pickleshare==0.7.5", "piexif==1.1.3", "pillow==9.5.0", "pip==22.2.1", "platformdirs==3.10.0", "prompt-toolkit==3.0.39", "protobuf==3.20.3", "psutil==5.9.5", "pure-eval==0.2.2", "py-cpuinfo==9.0.0", "pyasn1-modules==0.3.0", "pyasn1==0.5.0", "pycparser==2.21", "pydantic==1.10.12", "pydub==0.25.1", "pygments==2.16.1", "pyparsing==3.0.9", "pyreadline3==3.4.1", "pysocks==1.7.1", "python-dateutil==2.8.2", "python-multipart==0.0.6", "pytorch-lightning==1.9.4", "pytz==2023.3.post1", "pywavelets==1.4.1", "pyyaml==6.0.1", "realesrgan==0.3.0", "referencing==0.30.2", "regex==2023.8.8", "requests-oauthlib==1.3.1", "requests==2.31.0", "resize-right==0.0.2", "rich==13.5.2", "rpds-py==0.10.2", "rsa==4.9", "safetensors==0.3.1", "scikit-image==0.21.0", "scipy==1.11.2", "seaborn==0.12.1", "semantic-version==2.10.0", "send2trash==1.8.2", "sentencepiece==0.1.99", "setuptools==63.2.0", "six==1.16.0", "smmap==5.0.0", "sniffio==1.3.0", "sounddevice==0.4.6", "soupsieve==2.5", "stack-data==0.6.2", "starlette==0.26.1", "sympy==1.12", "tb-nightly==2.15.0a20230908", "tensorboard-data-server==0.7.1", "tensorboard==2.13.0", "tensorflow-estimator==2.13.0", "tensorflow-intel==2.13.0", "tensorflow-io-gcs-filesystem==0.31.0", "tensorflow==2.13.0", "termcolor==2.3.0", "tifffile==2023.8.30", "timm==0.9.2", "tokenizers==0.13.3", "tomesd==0.1.3", "tomli==2.0.1", "toolz==0.12.0", "torch==2.0.1+cu118", "torchdiffeq==0.2.3", "torchmetrics==1.1.1", "torchsde==0.2.5", "torchvision==0.15.2+cu118", "tqdm==4.66.1", "traitlets==5.10.0", "trampoline==0.1.2", "transformers==4.30.2", "typing-extensions==4.5.0", "tzdata==2023.3", "ultralytics==8.0.183", "urllib3==1.26.16", "uvicorn==0.23.2", "wcwidth==0.2.6", "websockets==11.0.3", "werkzeug==2.3.7", "wheel==0.41.2", "wrapt==1.15.0", "xformers==0.0.20", "yapf==0.40.1", "yarl==1.9.2", "zipp==3.16.2" ] }

What browsers do you use to access the UI?

Mozilla Firefox

Console logs

venv "B:\ASSD16\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
loading Smart Crop reqs from B:\ASSD16\stable-diffusion-webui\extensions\sd_smartprocess\requirements.txt
Checking Smart Crop requirements.
loading WD14-tagger reqs from B:\ASSD16\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\requirements.txt
Checking WD14-tagger requirements.
Launching Web UI with arguments: --medvram --opt-split-attention --xformers
[-] ADetailer initialized. version: 23.9.3, num models: 9
*** Error loading script: main.py
    Traceback (most recent call last):
      File "B:\ASSD16\stable-diffusion-webui\modules\scripts.py", line 382, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "B:\ASSD16\stable-diffusion-webui\modules\script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "B:\ASSD16\stable-diffusion-webui\extensions\sd_smartprocess\scripts\main.py", line 3, in <module>
        from extensions.sd_smartprocess import smartprocess
      File "B:\ASSD16\stable-diffusion-webui\extensions\sd_smartprocess\smartprocess.py", line 15, in <module>
        from extensions.sd_smartprocess.clipinterrogator import ClipInterrogator
      File "B:\ASSD16\stable-diffusion-webui\extensions\sd_smartprocess\clipinterrogator.py", line 14, in <module>
        from models.blip import blip_decoder, BLIP_Decoder
      File "B:\ASSD16\stable-diffusion-webui\repositories\BLIP\models\blip.py", line 11, in <module>
        from models.vit import VisionTransformer, interpolate_pos_embed
      File "B:\ASSD16\stable-diffusion-webui\repositories\BLIP\models\vit.py", line 21, in <module>
        from fairscale.nn.checkpoint.checkpoint_activations import checkpoint_wrapper
    ModuleNotFoundError: No module named 'fairscale'

---
== WD14 tagger /gpu:0, uname_result(system='Windows', node='DESKTOP-ADU5MU2', release='10', version='10.0.22621', machine='AMD64') ==
Loading weights [e8272d6146] from B:\ASSD16\stable-diffusion-webui\models\Stable-diffusion\NuclearAnimeSDXLDiffusion.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Creating model from config: B:\ASSD16\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Startup time: 17.3s (prepare environment: 6.5s, import torch: 4.9s, import gradio: 0.7s, setup paths: 0.6s, initialize shared: 0.2s, other imports: 0.4s, load scripts: 2.5s, create ui: 0.6s, gradio launch: 0.6s).
Loading VAE weights specified in settings: B:\ASSD16\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
Model loaded in 5.5s (load weights from disk: 1.2s, create model: 0.5s, apply weights to model: 1.1s, load VAE: 0.1s, calculate empty prompt: 2.5s).
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
loading WD14-tagger reqs from B:\ASSD16\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\requirements.txt
Checking WD14-tagger requirements.
Launching Web UI with arguments: --medvram --opt-split-attention --xformers
[-] ADetailer initialized. version: 23.9.3, num models: 9
== WD14 tagger /gpu:0, uname_result(system='Windows', node='DESKTOP-ADU5MU2', release='10', version='10.0.22621', machine='AMD64') ==
Loading weights [e8272d6146] from B:\ASSD16\stable-diffusion-webui\models\Stable-diffusion\NuclearAnimeSDXLDiffusion.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 15.4s (prepare environment: 4.8s, import torch: 4.9s, import gradio: 0.7s, setup paths: 0.6s, initialize shared: 0.2s, other imports: 0.4s, load scripts: 2.9s, create ui: 0.6s, gradio launch: 0.2s).
Creating model from config: B:\ASSD16\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights specified in settings: B:\ASSD16\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
Model loaded in 5.0s (load weights from disk: 1.4s, create model: 0.3s, apply weights to model: 1.0s, load VAE: 0.1s, calculate empty prompt: 2.1s).
Preprocessing [Image 0/31]:   0%|                                                               | 0/31 [00:04<?, ?it/s]
*** Error completing request
*** Arguments: ('task(f29xaxc78g6xmru)', 'C:\\Users\\redacted\\Desktop\\GITS\\95', 'C:\\Users\\redacted\\Desktop\\GITS\\95\\proc', 1024, 1024, 'ignore', False, False, False, False, False, 0.5, 0.2, True, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1) {}
    Traceback (most recent call last):
      File "B:\ASSD16\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "B:\ASSD16\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "B:\ASSD16\stable-diffusion-webui\modules\textual_inversion\ui.py", line 19, in preprocess
        modules.textual_inversion.preprocess.preprocess(*args)
      File "B:\ASSD16\stable-diffusion-webui\modules\textual_inversion\preprocess.py", line 18, in preprocess
        preprocess_work(process_src, process_dst, process_width, process_height, preprocess_txt_action, process_keep_original_size, process_flip, process_split, process_caption, process_caption_deepbooru, split_threshold, overlap_ratio, process_focal_crop, process_focal_crop_face_weight, process_focal_crop_entropy_weight, process_focal_crop_edges_weight, process_focal_crop_debug, process_multicrop, process_multicrop_mindim, process_multicrop_maxdim, process_multicrop_minarea, process_multicrop_maxarea, process_multicrop_objective, process_multicrop_threshold)
      File "B:\ASSD16\stable-diffusion-webui\modules\textual_inversion\preprocess.py", line 212, in preprocess_work
        for focal in autocrop.crop_image(img, autocrop_settings):
      File "B:\ASSD16\stable-diffusion-webui\modules\textual_inversion\autocrop.py", line 32, in crop_image
        focus = focal_point(im_debug, settings)
      File "B:\ASSD16\stable-diffusion-webui\modules\textual_inversion\autocrop.py", line 75, in focal_point
        face_points = image_face_points(im, settings) if settings.face_points_weight > 0 else []
      File "B:\ASSD16\stable-diffusion-webui\modules\textual_inversion\autocrop.py", line 150, in image_face_points
        faces = detector.detect(np.array(im))
    cv2.error: OpenCV(4.8.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\net_impl.cpp:279: error: (-204:Requested object was not found) Layer with requested id=-1 not found in function 'cv::dnn::dnn4_v20230620::Net::Impl::getLayerData'

---

Additional information

No response

tlegower commented 1 year ago

The issue is with OpenCV 4.8; you need an older version, such as 4.7. However, if you are running the ControlNet extension, its installer will uninstall OpenCV 4.7 and install 4.8 during the webui load sequence.

I fixed this issue by:

  1. Turning off the ControlNet extension in the webui
  2. pip uninstall opencv-python
  3. pip install opencv-python==4.7.0.72

Then I relaunched, and training with face focal point runs.
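Before relaunching, the downgrade can be verified from inside the webui's venv. A quick check (a sketch; the model path assumes a v1.6.0 install):

    # Sketch: confirm the downgraded OpenCV can run the cached face model.
    import cv2
    import numpy as np

    print("OpenCV", cv2.__version__)  # expect 4.7.0 after the downgrade

    detector = cv2.FaceDetectorYN.create(
        r"B:\ASSD16\stable-diffusion-webui\models\opencv\face_detection.onnx",
        "",
        (320, 320),
    )
    detector.detect(np.zeros((320, 320, 3), dtype=np.uint8))  # no cv2.error means the pair works
    print("face detector OK")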

captainzero93 commented 12 months ago

This isn't really a fix, though, because I need ControlNet and would have to go into the venv and keep switching OpenCV versions every time. It may be worth posting this issue to the ControlNet extension as well.
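If the goal is to keep ControlNet enabled while stopping its installer from re-upgrading OpenCV on every launch, one untested possibility is pip's PIP_CONSTRAINT environment variable (the env-var form of pip's --constraint option), set in webui-user.bat so every pip call made by extension installers honors the pin. Treat the file name and placement below as assumptions:

    rem webui-user.bat -- sketch: pin OpenCV for all pip installs at launch.
    rem constraints.txt is a file you create yourself, containing one line:
    rem     opencv-python==4.7.0.72
    set PIP_CONSTRAINT=B:\ASSD16\stable-diffusion-webui\constraints.txt
    set COMMANDLINE_ARGS=--medvram --opt-split-attention --xformers

With the constraint in place, an installer that asks for opencv-python gets 4.7.0.72, and one that explicitly requires a newer version fails its install step instead of silently upgrading.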

kurilee commented 11 months ago

Update to this version of the YuNet face detection model manually: https://github.com/opencv/opencv_zoo/blob/main/models/face_detection_yunet/face_detection_yunet_2023mar.onnx
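For reference, a sketch of the manual replacement. The cache path and the face_detection.onnx filename are what a v1.6.0 install appears to use for autocrop, so treat them as assumptions and back up the old file first; if the download returns a Git LFS pointer instead of the model, fetch it through the browser link above instead:

    # Sketch: overwrite the cached YuNet model with the 2023mar version.
    # ASSUMPTION: autocrop caches its model at models/opencv/face_detection.onnx.
    import urllib.request

    url = ("https://github.com/opencv/opencv_zoo/raw/main/models/"
           "face_detection_yunet/face_detection_yunet_2023mar.onnx")
    dest = r"B:\ASSD16\stable-diffusion-webui\models\opencv\face_detection.onnx"
    urllib.request.urlretrieve(url, dest)
    print("replaced", dest)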

NaughtDZ commented 10 months ago

Update to this version of the YuNet face detection model manually: https://github.com/opencv/opencv_zoo/blob/main/models/face_detection_yunet/face_detection_yunet_2023mar.onnx

It works! Unfortunately, as of the latest version, the bundled ONNX model has still not been updated, so it still needs to be replaced manually.