Bing-su / adetailer

Auto detecting, masking and inpainting with detection model.
GNU Affero General Public License v3.0

[Bug]: Installing in SD.Next via repository (extension tab) no longer downloads models #501

Closed: MysticDaedra closed this issue 6 months ago

MysticDaedra commented 7 months ago

Describe the bug

I just re-installed adetailer, and it seems the models are no longer downloaded. The code description says it is unnecessary to download any HuggingFace models manually, but either that is incorrect or there is a bug in the installer.

Screenshots

No response

Console logs, from start to end.

Using VENV: D:\automatic\venv
14:39:21-881765 INFO     Starting SD.Next
14:39:21-884765 INFO     Logger: file="D:\automatic\sdnext.log" level=DEBUG size=65 mode=create
14:39:21-884765 INFO     Python 3.10.6 on Windows
14:39:22-384725 INFO     Version: app=sd.next updated=2024-02-10 hash=3c952675
                         url=https://github.com/vladmandic/automatic/tree/master
14:39:22-913605 INFO     Platform: arch=AMD64 cpu=AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD system=Windows
                         release=Windows-10-10.0.22621-SP0 python=3.10.6
14:39:22-914606 DEBUG    Setting environment tuning
14:39:22-916606 DEBUG    HF cache folder: C:\Users\Joshua\.cache\huggingface\hub
14:39:22-920064 DEBUG    Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
14:39:22-921063 DEBUG    Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
14:39:22-925065 INFO     nVidia CUDA toolkit detected: nvidia-smi present
14:39:31-128429 WARNING  Modified files: ['scripts/detect_extension.py']
14:39:31-256186 DEBUG    Repository update time: Sat Feb 10 02:42:56 2024
14:39:31-257197 INFO     Startup: standard
14:39:31-258479 INFO     Verifying requirements
14:39:31-262479 INFO     Verifying packages
14:39:31-263481 INFO     Verifying submodules
14:39:39-540490 DEBUG    Submodule: extensions-builtin/sd-extension-chainner / main
14:39:39-800936 DEBUG    Submodule: extensions-builtin/sd-extension-system-info / main
14:39:40-053779 DEBUG    Submodule: extensions-builtin/sd-webui-agent-scheduler / main
14:39:40-306504 DEBUG    Submodule: extensions-builtin/sd-webui-controlnet / main
14:39:40-590776 DEBUG    Submodule: extensions-builtin/stable-diffusion-webui-images-browser / main
14:39:40-870291 DEBUG    Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
14:39:41-132410 DEBUG    Submodule: modules/k-diffusion / master
14:39:41-382085 DEBUG    Submodule: modules/lora / main
14:39:41-640476 DEBUG    Submodule: wiki / master
14:39:41-814604 DEBUG    Register paths
14:39:41-966559 DEBUG    Installed packages: 295
14:39:41-968062 DEBUG    Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
                         'stable-diffusion-webui-rembg']
14:39:42-287849 DEBUG    Running extension installer:
                         D:\automatic\extensions-builtin\sd-extension-system-info\install.py
14:39:42-839494 DEBUG    Running extension installer:
                         D:\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py
14:39:43-369515 DEBUG    Running extension installer: D:\automatic\extensions-builtin\sd-webui-controlnet\install.py
14:39:43-937477 DEBUG    Running extension installer:
                         D:\automatic\extensions-builtin\stable-diffusion-webui-images-browser\install.py
14:39:44-472053 DEBUG    Running extension installer:
                         D:\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py
14:39:45-053197 DEBUG    Extensions all: ['adetailer', 'sd-webui-infinite-image-browsing']
14:39:45-055198 DEBUG    Running extension installer: D:\automatic\extensions\adetailer\install.py
14:39:45-635523 DEBUG    Running extension installer:
                         D:\automatic\extensions\sd-webui-infinite-image-browsing\install.py
14:39:53-387027 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
                         'stable-diffusion-webui-rembg', 'adetailer', 'sd-webui-infinite-image-browsing']
14:39:53-388028 INFO     Verifying requirements
14:39:53-392029 DEBUG    Setup complete without errors: 1707691193
14:39:53-407052 INFO     Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.01}
14:39:53-408053 DEBUG    Starting module: <module 'webui' from 'D:\\automatic\\webui.py'>
14:39:53-410053 INFO     Command line args: ['--debug'] debug=True
14:39:53-411053 DEBUG    Env flags: []
14:39:56-119462 DEBUG    Package not found: olive-ai
14:39:58-156935 INFO     Load packages: {'torch': '2.2.0+cu121', 'diffusers': '0.26.2', 'gradio': '3.43.2'}
14:39:59-470522 DEBUG    Read: file="config.json" json=65 bytes=3922 time=0.000
14:39:59-472522 DEBUG    Unknown settings: ['ad_max_models', 'civitai_link_key', 'multiple_tqdm',
                         'ad_same_seed_for_each_tap', 'mudd_states', 'civitai_folder_lyco', 'image_browser_active_tabs']
14:39:59-474523 INFO     Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="xFormers" mode=no_grad
14:39:59-607043 INFO     Device: device=NVIDIA GeForce RTX 3070 n=1 arch=sm_90 cap=(8, 6) cuda=12.1 cudnn=8801
                         driver=551.23
14:40:02-051454 INFO     ONNX: selected=CUDAExecutionProvider, available=['TensorrtExecutionProvider',
                         'CUDAExecutionProvider', 'CPUExecutionProvider']
14:40:02-204669 DEBUG    Importing LDM
14:40:02-234575 DEBUG    Entering start sequence
14:40:02-237078 DEBUG    Initializing
14:40:02-271529 INFO     Available VAEs: path="D:\Stable Diffusion Files\Models\VAE" items=1
14:40:02-273530 INFO     Disabled extensions: ['sd-webui-controlnet']
14:40:02-275531 DEBUG    Scanning diffusers cache: D:\Stable Diffusion Files\Models\Diffusers D:\Stable Diffusion
                         Files\Models\Diffusers items=3 time=0.00
14:40:02-279035 DEBUG    Read: file="cache.json" json=2 bytes=80247 time=0.002
14:40:02-284036 DEBUG    Read: file="metadata.json" json=387 bytes=1058048 time=0.004
14:40:02-286539 INFO     Available models: path="D:\Stable Diffusion Files\Models\Checkpoints" items=8 time=0.01
14:40:02-408611 DEBUG    Load extensions
14:40:02-568877 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py'
                         14:40:02-565463 INFO     LoRA networks: available=19 folders=2
14:40:03-145537 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using
                         sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
14:40:04-156527 INFO     Extension: script='extensions\adetailer\scripts\!adetailer.py' [-] ADetailer initialized.
                         version: 24.1.2, num models: 9
14:40:04-256463 INFO     Extensions init time: 1.85 img2imgalt.py=0.11 sd-extension-chainner=0.06
                         sd-webui-agent-scheduler=0.51 stable-diffusion-webui-images-browser=0.42 adetailer=0.59
                         sd-webui-infinite-image-browsing=0.09
14:40:04-281492 DEBUG    Read: file="html/upscalers.json" json=4 bytes=2672 time=0.001
14:40:04-282492 DEBUG    Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.000
14:40:04-285494 DEBUG    chaiNNer models: path="D:\Stable Diffusion Files\Models\chaiNNer" defined=24 discovered=3
                         downloaded=5
14:40:04-287557 DEBUG    Upscaler type=ESRGAN folder="D:\Stable Diffusion Files\Models\ESRGAN"
                         model="4x_foolhardy_Remacri" path="D:\Stable Diffusion
                         Files\Models\ESRGAN\4x_foolhardy_Remacri.pth"
14:40:04-288558 DEBUG    Upscaler type=ESRGAN folder="D:\Stable Diffusion Files\Models\ESRGAN" model="4x_NMKD-Siax_200k"
                         path="D:\Stable Diffusion Files\Models\ESRGAN\4x_NMKD-Siax_200k.pth"
14:40:04-290559 DEBUG    Upscaler type=SwinIR folder="D:\Stable Diffusion Files\Models\SwinIR" model="SwinIR_4x"
                         path="D:\Stable Diffusion Files\Models\SwinIR\SwinIR_4x.pth"
14:40:04-293559 DEBUG    Load upscalers: total=58 downloaded=18 user=6 time=0.04 ['None', 'Lanczos', 'Nearest',
                         'ChaiNNer', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
14:40:04-312453 DEBUG    Load styles: folder="D:\Stable Diffusion Files\Models\Styles" items=297 time=0.02
14:40:04-316454 DEBUG    Creating UI
14:40:04-317558 INFO     Load UI theme: name="invoked" style=Auto base=sdnext.css
14:40:04-326559 DEBUG    UI initialize: txt2img
14:40:04-334614 DEBUG    List items: function=create_items
14:40:04-341685 DEBUG    Read: file="html\reference.json" json=36 bytes=19033 time=0.000
14:40:04-371510 DEBUG    Extra networks: page='model' items=44 subfolders=3 tab=txt2img folders=['D:\\Stable Diffusion
                         Files\\Models\\Checkpoints', 'D:\\Stable Diffusion Files\\Models\\Diffusers',
                         'models\\Reference'] list=0.03 thumb=0.01 desc=0.00 info=0.00 workers=4
14:40:04-399034 DEBUG    Extra networks: page='style' items=297 subfolders=1 tab=txt2img folders=['D:\\Stable Diffusion
                         Files\\Models\\Styles', 'html'] list=0.03 thumb=0.00 desc=0.00 info=0.00 workers=4
14:40:04-403035 DEBUG    Extra networks: page='embedding' items=20 subfolders=0 tab=txt2img folders=['D:\\Stable
                         Diffusion Files\\Models\\Embeddings'] list=0.03 thumb=0.00 desc=0.00 info=0.01 workers=4
14:40:04-405035 DEBUG    Extra networks: page='hypernetwork' items=0 subfolders=0 tab=txt2img folders=['D:\\Stable
                         Diffusion Files\\Models\\Hypernetworks'] list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=4
14:40:04-408010 DEBUG    Extra networks: page='vae' items=1 subfolders=0 tab=txt2img folders=['D:\\Stable Diffusion
                         Files\\Models\\VAE'] list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=4
14:40:04-412012 DEBUG    Extra networks: page='lora' items=19 subfolders=0 tab=txt2img folders=['D:\\Stable Diffusion
                         Files\\Models\\Loras', 'D:\\Stable Diffusion Files\\Models\\LyCORIS'] list=0.02 thumb=0.00
                         desc=0.00 info=0.02 workers=4
14:40:04-540950 DEBUG    UI initialize: img2img
14:40:04-804434 DEBUG    UI initialize: control models=D:\Stable Diffusion Files\Models\Control
14:40:05-064901 DEBUG    Read: file="ui-config.json" json=104 bytes=6288 time=0.000
14:40:05-313678 DEBUG    Themes: builtin=11 default=5 external=55
14:40:09-253068 DEBUG    Extension list: processed=329 installed=9 enabled=8 disabled=1 visible=329 hidden=0
14:40:09-424833 DEBUG    Root paths: ['D:\\automatic']
14:40:09-754981 INFO     Local URL: http://127.0.0.1:7860/
14:40:09-756484 DEBUG    Gradio functions: registered=3319
14:40:09-756484 INFO     Initializing middleware
14:40:09-762048 DEBUG    Creating API
14:40:09-940661 INFO     [AgentScheduler] Task queue is empty
14:40:09-942662 INFO     [AgentScheduler] Registering APIs
14:40:10-131434 DEBUG    Scripts setup: ['X/Y/Z Grid:0.01', 'Face:0.009', 'AnimateDiff:0.005', 'ADetailer:0.196']
14:40:10-132434 DEBUG    Model metadata: file="metadata.json" no changes
14:40:10-134435 DEBUG    Model requested: fn=<lambda>
14:40:10-135434 INFO     Select: model="socababesTurboXL_v12Hybrid [924da13ca2]"
14:40:10-136435 DEBUG    Load model: existing=False target=D:\Stable Diffusion
                         Files\Models\Checkpoints\socababesTurboXL_v12Hybrid.safetensors info=None
14:40:10-346404 DEBUG    Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=False upscast=False
14:40:10-347416 INFO     Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float16 unet=torch.float16
                         context=inference_mode fp16=True bf16=None
14:40:10-348416 DEBUG    Diffusers loading: path="D:\Stable Diffusion
                         Files\Models\Checkpoints\socababesTurboXL_v12Hybrid.safetensors"
14:40:10-349416 INFO     Autodetect: model="Stable Diffusion XL" class=StableDiffusionXLPipeline file="D:\Stable
                         Diffusion Files\Models\Checkpoints\socababesTurboXL_v12Hybrid.safetensors" size=6777MB
14:40:29-742675 DEBUG    Setting model: pipeline=StableDiffusionXLPipeline config={'low_cpu_mem_usage': True,
                         'torch_dtype': torch.float16, 'load_connected_pipeline': True, 'variant': 'fp16',
                         'extract_ema': True, 'requires_aesthetics_score': True, 'use_safetensors': True}
14:40:29-744675 DEBUG    Setting model: enable model CPU offload
14:40:29-761051 DEBUG    Setting model: enable VAE slicing
14:40:29-763052 DEBUG    Setting model: enable VAE tiling
14:40:30-193847 DEBUG    Setting model: enable fused projections
14:40:37-619151 INFO     Load embeddings: loaded=1 skipped=19 time=3.07
14:40:37-934284 DEBUG    GC: collected=7583 device=cuda {'ram': {'used': 10.0, 'total': 31.9}, 'gpu': {'used': 1.09,
                         'total': 8.0}, 'retries': 0, 'oom': 0} time=0.31
14:40:37-943789 INFO     Load model: time=27.48 load=27.48 native=1024 {'ram': {'used': 10.0, 'total': 31.9}, 'gpu':
                         {'used': 1.09, 'total': 8.0}, 'retries': 0, 'oom': 0}
14:40:37-946292 DEBUG    Save: file="config.json" json=65 bytes=3815 time=0.002
14:40:37-947307 DEBUG    Unused settings: ['civitai_link_key', 'multiple_tqdm', 'mudd_states', 'civitai_folder_lyco']
14:40:37-948470 DEBUG    Script callback init time: image_browser.py:ui_tabs=0.95 system-info.py:app_started=0.07
                         task_scheduler.py:app_started=0.21
14:40:37-949470 INFO     Startup time: 44.53 torch=2.62 olive=0.08 gradio=2.04 libraries=4.05 extensions=1.85
                         face-restore=0.12 ui-en=0.38 ui-txt2img=0.11 ui-img2img=0.10 ui-control=0.05 ui-settings=0.33
                         ui-extensions=3.90 ui-defaults=0.09 launch=0.39 api=0.08 app-started=0.29 checkpoint=27.81
14:41:59-771889 DEBUG    Server: alive=True jobs=1 requests=1 uptime=120 memory=9.72/31.9 backend=Backend.DIFFUSERS
                         state=idle
Traceback (most recent call last):
  File "D:\automatic\venv\lib\site-packages\gradio\routes.py", line 507, in predict
    output = await route_utils.call_process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1437, in process_api
    result = await self.call_function(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1109, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\automatic\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\automatic\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "D:\automatic\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "D:\automatic\venv\lib\site-packages\gradio\utils.py", line 641, in wrapper
    response = f(*args, **kwargs)
  File "D:\automatic\modules\ui_control_helpers.py", line 67, in display_units
    return (num_units * [gr.update(visible=True)]) + ((max_units - num_units) * [gr.update(visible=False)])
TypeError: can't multiply sequence by non-int of type 'NoneType'
Traceback (most recent call last):
  File "D:\automatic\venv\lib\site-packages\gradio\routes.py", line 507, in predict
    output = await route_utils.call_process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1437, in process_api
    result = await self.call_function(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1083, in call_function
    assert block_fn.fn, f"function with index {fn_index} not defined."
AssertionError: function with index 2042 not defined.
14:42:54-423847 DEBUG    Control settings: id="HED" scribble=None
14:42:54-424847 DEBUG    Control settings: id="Midas Depth Hybrid" bg_th=None
14:42:54-424847 DEBUG    Control settings: id="Midas Depth Hybrid" depth_and_normal=
14:42:54-426350 DEBUG    Control settings: id="MLSD" thr_v=
14:42:54-427365 DEBUG    Control settings: id="MLSD" thr_d=0.3
14:42:54-428365 DEBUG    Control settings: id="OpenPose" include_body=0
14:42:54-429365 DEBUG    Control settings: id="OpenPose" include_face=1
14:42:54-430366 DEBUG    Control settings: id="LineArt Realistic" coarse=4
14:42:54-431365 DEBUG    Control settings: id="Leres Depth" boost=None
Traceback (most recent call last):
  File "D:\automatic\venv\lib\site-packages\gradio\routes.py", line 507, in predict
    output = await route_utils.call_process_api(
14:42:54-432366 DEBUG    Control settings: id="Leres Depth" thr_a=4
  File "D:\automatic\venv\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1435, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1245, in preprocess_data
    processed_input.append(block.preprocess(inputs[i]))
  File "D:\automatic\venv\lib\site-packages\gradio\components\number.py", line 182, in preprocess
    return self._round_to_precision(x, self.precision)
14:42:54-433367 DEBUG    Control settings: id="Leres Depth" thr_b=0.4
  File "D:\automatic\venv\lib\site-packages\gradio\components\number.py", line 122, in _round_to_precision
    return float(num)
ValueError: could not convert string to float: ''
14:42:54-434367 DEBUG    Control settings: id="MediaPipe Face" min_confidence=32
14:42:54-435367 DEBUG    Control settings: id="Canny" low_threshold=False
14:42:54-436367 DEBUG    Control settings: id="Canny" high_threshold=512
14:42:54-437601 DEBUG    Control settings: id="DWPose" model=512
14:42:54-438601 DEBUG    Control settings: id="DWPose" min_confidence=False
14:42:54-438601 DEBUG    Control settings: id="SegmentAnything" model=28
14:42:54-440104 DEBUG    Control settings: id="Edge" pf=False
14:42:54-441115 DEBUG    Control settings: id="Edge" mode=7
14:42:54-442115 DEBUG    Control settings: id="Marigold Depth" color_map=Use same checkpoint
14:42:54-443116 DEBUG    Control settings: id="Marigold Depth" denoising_steps=False
14:42:54-444115 DEBUG    Control settings: id="Marigold Depth" ensemble_size=Use same VAE
14:42:54-445115 DEBUG    Control settings: id="Depth Anything" color_map=False
Traceback (most recent call last):
  File "D:\automatic\venv\lib\site-packages\gradio\routes.py", line 507, in predict
    output = await route_utils.call_process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1435, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1245, in preprocess_data
    processed_input.append(block.preprocess(inputs[i]))
  File "D:\automatic\modules\gr_hijack.py", line 12, in gr_image_preprocess
    im = gradio.processing_utils.decode_base64_to_image(x)
  File "D:\automatic\venv\lib\site-packages\gradio\processing_utils.py", line 59, in decode_base64_to_image
    img = Image.open(BytesIO(base64.b64decode(image_encoded)))
  File "D:\automatic\venv\lib\site-packages\PIL\Image.py", line 3309, in open
    raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x00000228B7A2A430>
Traceback (most recent call last):
  File "D:\automatic\venv\lib\site-packages\gradio\routes.py", line 507, in predict
    output = await route_utils.call_process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1435, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1245, in preprocess_data
    processed_input.append(block.preprocess(inputs[i]))
  File "D:\automatic\modules\gr_hijack.py", line 12, in gr_image_preprocess
    im = gradio.processing_utils.decode_base64_to_image(x)
  File "D:\automatic\venv\lib\site-packages\gradio\processing_utils.py", line 59, in decode_base64_to_image
    img = Image.open(BytesIO(base64.b64decode(image_encoded)))
  File "D:\automatic\venv\lib\site-packages\PIL\Image.py", line 3309, in open
    raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x00000228B7E2D3A0>
Traceback (most recent call last):
  File "D:\automatic\venv\lib\site-packages\gradio\routes.py", line 507, in predict
    output = await route_utils.call_process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1437, in process_api
    result = await self.call_function(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1109, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\automatic\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\automatic\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "D:\automatic\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "D:\automatic\venv\lib\site-packages\gradio\utils.py", line 641, in wrapper
    response = f(*args, **kwargs)
  File "D:\automatic\modules\ui_control_helpers.py", line 67, in display_units
    return (num_units * [gr.update(visible=True)]) + ((max_units - num_units) * [gr.update(visible=False)])
TypeError: can't multiply sequence by non-int of type 'NoneType'
Traceback (most recent call last):
  File "D:\automatic\venv\lib\site-packages\gradio\routes.py", line 507, in predict
    output = await route_utils.call_process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1435, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1245, in preprocess_data
    processed_input.append(block.preprocess(inputs[i]))
  File "D:\automatic\venv\lib\site-packages\gradio\components\number.py", line 182, in preprocess
    return self._round_to_precision(x, self.precision)
  File "D:\automatic\venv\lib\site-packages\gradio\components\number.py", line 122, in _round_to_precision
    return float(num)
ValueError: could not convert string to float: ''
Traceback (most recent call last):
  File "D:\automatic\venv\lib\site-packages\gradio\routes.py", line 507, in predict
    output = await route_utils.call_process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1437, in process_api
    result = await self.call_function(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1083, in call_function
    assert block_fn.fn, f"function with index {fn_index} not defined."
AssertionError: function with index 2042 not defined.
14:43:59-963923 DEBUG    Server: alive=True jobs=1 requests=64 uptime=241 memory=9.74/31.9 backend=Backend.DIFFUSERS
                         state=idle
14:44:11-151269 INFO     MOTD: N/A
14:44:17-964268 DEBUG    Themes: builtin=11 default=5 external=55
14:44:18-623875 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64;
                         rv:122.0) Gecko/20100101 Firefox/122.0
14:46:00-213119 DEBUG    Server: alive=True jobs=1 requests=244 uptime=361 memory=9.76/31.9 backend=Backend.DIFFUSERS
                         state=idle
14:48:00-451637 DEBUG    Server: alive=True jobs=1 requests=268 uptime=481 memory=9.68/31.9 backend=Backend.DIFFUSERS
                         state=idle
14:49:59-714509 DEBUG    Server: alive=True jobs=1 requests=292 uptime=600 memory=9.68/31.9 backend=Backend.DIFFUSERS
                         state=idle
14:51:59-969960 DEBUG    Server: alive=True jobs=1 requests=316 uptime=721 memory=9.68/31.9 backend=Backend.DIFFUSERS
                         state=idle
14:52:30-586639 DEBUG    Paste prompt: type="current" prompt="face of young girl wearing kawaii makeup with thick
                         eyelashes and pink lips coated in glitter, purple nebula eyes <lora:KawaiiMakeupXLv2:1.0>
                         Negative prompt: ugly, deformed, duplicates, extra limbs, (bl0use:1.4), anime, cgi, 3d,
                         cartoon, (looking at camera, looking at viewer:1.6)
                         Steps: 8, Seed: 2402340607, Sampler: DPM SDE, CFG scale: 4, Size: 896x1024, Parser: Full
                         parser, Model: socababesTurboXL_v12Hybrid, Model hash: 924da13ca2, Backend: Diffusers, App:
                         SD.Next, Version: 3c95267, Operations: inpaint, Init image size: 896x1024, Init image hash:
                         5af16293, Resize scale: 1, Denoising strength: 0.25, Mask blur: 4, Mask alpha: 1, Mask invert:
                         0, Mask content: 1, Mask area: 1, Mask padding: 64, Hypertile VAE: 448, Hypertile UNet: 448,
                         Lora hashes: "KawaiiMakeupXLv2: 04f8db3e", Sampler options: karras, Pipeline:
                         StableDiffusionXLInpaintPipeline"
14:52:30-591640 DEBUG    Settings overrides: []
14:54:00-209484 DEBUG    Server: alive=True jobs=1 requests=435 uptime=841 memory=9.68/31.9 backend=Backend.DIFFUSERS
                         state=idle
14:55:59-509571 DEBUG    Server: alive=True jobs=1 requests=459 uptime=960 memory=9.68/31.9 backend=Backend.DIFFUSERS
                         state=idle
14:57:59-530126 DEBUG    Server: alive=True jobs=1 requests=484 uptime=1080 memory=9.67/31.9 backend=Backend.DIFFUSERS
                         state=idle
Traceback (most recent call last):
  File "D:\automatic\venv\lib\site-packages\gradio\queueing.py", line 388, in call_prediction
    output = await route_utils.call_process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1435, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1245, in preprocess_data
    processed_input.append(block.preprocess(inputs[i]))
  File "D:\automatic\venv\lib\site-packages\gradio\components\radio.py", line 170, in preprocess
    return [value for _, value in self.choices].index(x)
ValueError: '1' is not in list
Traceback (most recent call last):
  File "D:\automatic\venv\lib\site-packages\gradio\queueing.py", line 388, in call_prediction
    output = await route_utils.call_process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1435, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "D:\automatic\venv\lib\site-packages\gradio\blocks.py", line 1245, in preprocess_data
    processed_input.append(block.preprocess(inputs[i]))
  File "D:\automatic\venv\lib\site-packages\gradio\components\radio.py", line 170, in preprocess
    return [value for _, value in self.choices].index(x)
ValueError: '1' is not in list
14:59:59-522426 DEBUG    Server: alive=True jobs=1 requests=572 uptime=1200 memory=9.68/31.9 backend=Backend.DIFFUSERS
                         state=idle
15:01:59-783284 DEBUG    Server: alive=True jobs=1 requests=596 uptime=1320 memory=9.67/31.9 backend=Backend.DIFFUSERS
                         state=idle
15:03:59-952316 DEBUG    Server: alive=True jobs=1 requests=620 uptime=1441 memory=9.67/31.9 backend=Backend.DIFFUSERS
                         state=idle

List of installed extensions

No response

Bing-su commented 7 months ago

Models are stored and reused in the HuggingFace cache directory.
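
For illustration, a minimal sketch of how a detection model can be fetched through the Hugging Face cache (assuming the models are hosted in the `Bingsu/adetailer` repo on the Hub and that `face_yolov8n.pt` is one of the available files; this is not ADetailer's actual installer code). `hf_hub_download` returns the local cached path and only downloads when the file is missing from the cache directory shown in the log above (`C:\Users\Joshua\.cache\huggingface\hub`):

```python
# Hedged sketch, assuming the "Bingsu/adetailer" Hub repo and the
# "face_yolov8n.pt" filename; not the extension's actual install.py logic.
from huggingface_hub import hf_hub_download

# Returns the local path of the cached file; only hits the network when the
# file is not already present in the HF cache (default ~/.cache/huggingface/hub).
model_path = hf_hub_download(repo_id="Bingsu/adetailer", filename="face_yolov8n.pt")
print(model_path)
```

Under this model, a fresh extension install would not need to place any model files under the extension folder, since anything already in the HuggingFace cache is simply reused.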