Bing-su / adetailer

Auto detecting, masking and inpainting with detection model.
GNU Affero General Public License v3.0

Bug with mediapipe #702

Closed: marc2608 closed this issue 1 month ago

marc2608 commented 2 months ago

Describe the bug

Hi, all the models in ADetailer work fine except for the four mediapipe ones (mediapipe_face_full, mediapipe_face_short, mediapipe_face_mesh, mediapipe_face_mesh_eyes_only); none of them works. For example, in text-to-image the image generates to 100%, then the model performs no detection, generation stops, and several error popups appear at the top right of the A1111 web UI. Any ideas on what might be wrong?

Steps to reproduce

Installation: version v1.10.1 • python 3.10.6 • torch 2.1.2+cu121 • xformers 0.0.23.post1 • gradio 3.41.2. Generate an image (SDE Karras sampler) with ADetailer enabled and a mediapipe model selected.

Screenshots

(screenshot 133229 attached)

Console logs, from start to end.

Already up to date.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing requirements
[Warning] face_checkpoints not detected! Please download it from Release!
[Warning] global_checkpoints not detected! Please download it from Release!
[Warning] face_landmarks not detected! Please download it from Release!
Installing sd-webui-controlnet requirement: changing opencv-python version from 4.7.0.72 to 4.8.0
All models for DeOldify are already downloaded.
Installing yt-dlp for DeOldify extension.
Installing yt-dlp
Installing sd-webui-facefusion requirement: changing onnx version from 1.14.0 to 1.14.1
Faceswaplab : Use GPU requirements
Checking faceswaplab requirements
0.013452500104904175
[Warning] face_checkpoints not detected! Please download it from Release!
[Warning] global_checkpoints not detected! Please download it from Release!
[Warning] face_landmarks not detected! Please download it from Release!
CUDA 12.1
Requirement already satisfied: insightface==0.7.3 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from -r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (0.7.3)
Collecting onnx==1.14.0 (from -r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 2))
  Using cached onnx-1.14.0-cp310-cp310-win_amd64.whl.metadata (15 kB)
Requirement already satisfied: onnxruntime==1.15.0 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from -r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 3)) (1.15.0)
Collecting opencv-python==4.7.0.72 (from -r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 4))
  Using cached opencv_python-4.7.0.72-cp37-abi3-win_amd64.whl.metadata (18 kB)
Requirement already satisfied: ifnude in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from -r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 5)) (0.0.3)
Requirement already satisfied: cython in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from -r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 6)) (3.0.10)
Requirement already satisfied: numpy in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (1.26.2)
Requirement already satisfied: tqdm in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (4.66.4)
Requirement already satisfied: requests in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (2.32.3)
Requirement already satisfied: matplotlib in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (3.9.0)
Requirement already satisfied: Pillow in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (9.5.0)
Requirement already satisfied: scipy in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (1.13.1)
Requirement already satisfied: scikit-learn in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (1.5.0)
Requirement already satisfied: scikit-image in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (0.21.0)
Requirement already satisfied: easydict in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (1.13)
Requirement already satisfied: albumentations in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (1.4.3)
Requirement already satisfied: prettytable in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (3.10.0)
Requirement already satisfied: protobuf>=3.20.2 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from onnx==1.14.0->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 2)) (5.28.0)
Requirement already satisfied: typing-extensions>=3.6.2.1 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from onnx==1.14.0->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 2)) (4.12.2)
Requirement already satisfied: coloredlogs in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from onnxruntime==1.15.0->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 3)) (15.0.1)
Requirement already satisfied: flatbuffers in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from onnxruntime==1.15.0->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 3)) (24.3.25)
Requirement already satisfied: packaging in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from onnxruntime==1.15.0->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 3)) (24.1)
Requirement already satisfied: sympy in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from onnxruntime==1.15.0->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 3)) (1.12.1)
Requirement already satisfied: opencv-python-headless>=4.5.1.48 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from ifnude->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 5)) (4.10.0.82)
Requirement already satisfied: PyYAML in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from albumentations->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (6.0.1)
Requirement already satisfied: networkx>=2.8 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from scikit-image->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (3.3)
Requirement already satisfied: imageio>=2.27 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from scikit-image->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (2.34.1)
Requirement already satisfied: tifffile>=2022.8.12 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from scikit-image->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (2024.5.22)
Requirement already satisfied: PyWavelets>=1.1.1 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from scikit-image->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (1.6.0)
Requirement already satisfied: lazy_loader>=0.2 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from scikit-image->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (0.4)
Requirement already satisfied: joblib>=1.2.0 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from scikit-learn->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (1.4.2)
Requirement already satisfied: threadpoolctl>=3.1.0 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from scikit-learn->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (3.5.0)
Requirement already satisfied: humanfriendly>=9.1 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from coloredlogs->onnxruntime==1.15.0->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 3)) (10.0)
Requirement already satisfied: contourpy>=1.0.1 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from matplotlib->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (1.2.1)
Requirement already satisfied: cycler>=0.10 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from matplotlib->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from matplotlib->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (4.53.0)
Requirement already satisfied: kiwisolver>=1.3.1 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from matplotlib->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (1.4.5)
Requirement already satisfied: pyparsing>=2.3.1 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from matplotlib->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (3.1.2)
Requirement already satisfied: python-dateutil>=2.7 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from matplotlib->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (2.9.0.post0)
Requirement already satisfied: wcwidth in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from prettytable->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (0.2.13)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from requests->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from requests->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (2.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from requests->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (1.26.18)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from requests->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (2024.6.2)
Requirement already satisfied: mpmath<1.4.0,>=1.1.0 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from sympy->onnxruntime==1.15.0->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 3)) (1.3.0)
Requirement already satisfied: colorama in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from tqdm->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (0.4.6)
Requirement already satisfied: pyreadline3 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from humanfriendly>=9.1->coloredlogs->onnxruntime==1.15.0->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 3)) (3.4.1)
Requirement already satisfied: six>=1.5 in c:\users\marc2\desktop\a1111\system\python\lib\site-packages (from python-dateutil>=2.7->matplotlib->insightface==0.7.3->-r C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-roop\requirements.txt (line 1)) (1.16.0)
Using cached onnx-1.14.0-cp310-cp310-win_amd64.whl (13.3 MB)
Using cached opencv_python-4.7.0.72-cp37-abi3-win_amd64.whl (38.2 MB)
Installing collected packages: opencv-python, onnx
  Attempting uninstall: opencv-python
    Found existing installation: opencv-python 4.10.0.84
    Uninstalling opencv-python-4.10.0.84:
      Successfully uninstalled opencv-python-4.10.0.84
  Attempting uninstall: onnx
    Found existing installation: onnx 1.16.2
    Uninstalling onnx-1.16.2:
      Successfully uninstalled onnx-1.16.2
Successfully installed onnx-1.14.0 opencv-python-4.7.0.72
Launching Web UI with arguments: --theme dark --xformers --api --disable-safe-unpickle
2024-09-03 13:21:45.032883: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-09-03 13:21:45.812969: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.

[Warning] face_checkpoints not detected! Please download it from Release!
[Warning] global_checkpoints not detected! Please download it from Release!
[Warning] face_landmarks not detected! Please download it from Release!

**********Read forge sample code *********
[-] ADetailer initialized. version: 24.9.0, num models: 22
ControlNet preprocessor location: C:\Users\marc2\Desktop\A1111\webui\extensions\sd-webui-controlnet\annotator\downloads
2024-09-03 13:21:57,503 - ControlNet - INFO - ControlNet v1.1.455
[sd-webui-freeu] Controlnet support: *enabled*
INFO:scripts.incantation_base:Incantation: Debug scripts are disabled. Set INCANT_DEBUG environment variable to enable them.
INFO:incantation_base.py:Incantation: Debug scripts are disabled. Set INCANT_DEBUG environment variable to enable them.
C:\Users\marc2\Desktop\A1111\system\python\lib\site-packages\diffusers\models\transformers\transformer_2d.py:34: FutureWarning: `Transformer2DModelOutput` is deprecated and will be removed in version 1.0.0. Importing `Transformer2DModelOutput` from `diffusers.models.transformer_2d` is deprecated and this will be removed in a future version. Please use `from diffusers.models.modeling_outputs import Transformer2DModelOutput`, instead.
  deprecate("Transformer2DModelOutput", "1.0.0", deprecation_message)
sd-webui-prompt-all-in-one background API service started successfully.
13:22:02 - ReActor - STATUS - Running v0.7.1-a2 on Device: CUDA
2024-09-03 13:22:02,920 - roop - INFO - roop v0.0.2
2024-09-03 13:22:02,951 - roop - INFO - roop v0.0.2
Loading weights [56a0a8d647] from C:\Users\marc2\Desktop\A1111\webui\models\Stable-diffusion\almostAnythingV10_almostAnythingV10.safetensors
WARNING:incantation_base.py:Module CFG Combiner does not implement get_xyz_axis_options
WARNING:scripts.incantation_base:Module CFG Combiner does not implement get_xyz_axis_options
Creating model from config: C:\Users\marc2\Desktop\A1111\webui\configs\v1-inference.yaml
[Temporal] [WARNING] Cannot load GlobalOptions from C:/Users/marc2/Desktop/A1111/webui/extensions/sd-webui-temporal/settings
Character Path:
Failed to fetch models from Ollama: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000011C24A011B0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Loading VAE weights specified in settings: C:\Users\marc2\Desktop\A1111\webui\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Failed to fetch models from Ollama: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000011C24A20340>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Embedding Model value: None
LORA Model value: None
[Temporal] [WARNING] Cannot load GlobalOptions from C:/Users/marc2/Desktop/A1111/webui/extensions/sd-webui-temporal/settings
2024-09-03 13:22:10,183 - ControlNet - INFO - ControlNet UI callback registered.
Character Path:
Applying attention optimization: xformers... done.
Model loaded in 6.4s (load weights from disk: 0.2s, create model: 0.5s, apply weights to model: 4.2s, load VAE: 0.9s, move model to device: 0.3s, load textual inversion embeddings: 0.1s, calculate empty prompt: 0.1s).
Failed to fetch models from Ollama: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000011D37BFFF40>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Failed to fetch models from Ollama: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000011D37C21E40>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Embedding Model value: None
LORA Model value: None
git: 'submodule' is not a git command. See 'git --help'.
[openOutpaint-extension-submodule] failed to download update, check network
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\marc2/.insightface\models\buffalo_l\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\marc2/.insightface\models\buffalo_l\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\marc2/.insightface\models\buffalo_l\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\marc2/.insightface\models\buffalo_l\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\marc2/.insightface\models\buffalo_l\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Character Path:
Failed to fetch models from Ollama: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000011CCB1FF8E0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Failed to fetch models from Ollama: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000011CCB1FF550>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Embedding Model value: None
LORA Model value: None
Character Path:
Failed to fetch models from Ollama: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000011CCBBB3100>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Failed to fetch models from Ollama: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000011CCBBB3FA0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Embedding Model value: None
LORA Model value: None
Character Path:
Failed to fetch models from Ollama: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000011CDF5B88E0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Failed to fetch models from Ollama: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000011CDD68D510>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Embedding Model value: None
LORA Model value: None
Character Path:
Failed to fetch models from Ollama: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000011CDF5B8A60>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Failed to fetch models from Ollama: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000011CDF762170>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Embedding Model value: None
LORA Model value: None
Startup time: 173.2s (prepare environment: 105.1s, import torch: 5.7s, import gradio: 1.0s, setup paths: 3.7s, initialize shared: 0.1s, other imports: 0.8s, load scripts: 15.0s, create ui: 15.4s, gradio launch: 2.8s, add APIs: 23.4s, app_started_callback: 0.1s).
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00,  7.00it/s]
postprocess_batch95%|██████████████████████████████████████████████████████████████▋   | 19/20 [00:01<00:00, 10.03it/s]
Press any key to continue...

List of installed extensions

LDSR
Lora
ScuNET
SwinIR
canvas-zoom-and-pan
extra-options-section
hypertile
mobile
postprocessing-for-training
prompt-bracket-checker
soft-inpainting
--sd-webui-ar-plus
--sd-webui-ar-plusplus
ABG_extension
CharacteristicGuidanceWebUI
DreamArtist-sd-webui-extension
FreeU
IF_prompt_MKR
OneButtonPrompt
PBRemTools
Stable-Diffusion-Webui-Civitai-Helper
StylePile
Stylez
a-person-mask-generator
adetailer
canvas-zoom
clip-interrogator-ext
custom-hires-fix-for-automatic1111
loopback_scaler
model-keyword
multidiffusion-upscaler-for-automatic1111
next-diffusion-prompt-generator
openOutpaint
openOutpaint-webUI-extension
openpose-editor
sd-dynamic-prompts
sd-dynamic-thresholding
sd-webui-agentattention
sd-webui-cads
sd-webui-cd-tuner
sd-webui-cleaner
sd-webui-clear-screen
sd-webui-color-correction-extras
sd-webui-color-enhance
sd-webui-controlnet
sd-webui-creaprompt
sd-webui-decadetw-auto-prompt-llm
sd-webui-deoldify
sd-webui-detail-daemon
sd-webui-diffusion-cg
sd-webui-facefusion
sd-webui-faceswaplab
sd-webui-freeu
sd-webui-incantations
sd-webui-inpaint-anything
sd-webui-inpaint-background
sd-webui-kohya-hiresfix
sd-webui-lama-cleaner-masked-content
sd-webui-lcm
sd-webui-llul
sd-webui-negpip
sd-webui-neutral-prompt
sd-webui-old-photo-restoration
sd-webui-prompt-all-in-one
sd-webui-ranbooru
sd-webui-reactor
sd-webui-regional-prompter
sd-webui-roop
sd-webui-rpg-diffusionmaster
sd-webui-segment-anything
sd-webui-semantic-guidance
sd-webui-temporal
stable-diffusion-NPW
stable-diffusion-webui-Prompt_Generator
stable-diffusion-webui-conditioning-highres-fix
stable-diffusion-webui-promptgen
stable-diffusion-webui-randomize
stable-diffusion-webui-rembg
stable-diffusion-webui-sonar
stable-diffusion-webui-tokenizer
stable-diffusion-webui-wildcards
ultimate-upscale-for-automatic1111

dosubot[bot] commented 2 months ago

Hey, @marc2608! I'm here to assist you with any bugs, questions, or contributions. Let's work together to solve the issue at hand.

I found a similar closed issue: [Bug]: Can't use "ad_model": "mediapipe_face_all" which was last updated on June 30, 2023. The issue was due to a typo, and the correct model type is "mediapipe_face_full" [1].

The errors you're experiencing with the mediapipe models in adetailer are likely related to how the mediapipe library is imported: it is imported inside the detection functions rather than at the top of the script, which can cause problems if the import fails or if there are environment-specific issues.

To resolve this, ensure that the mediapipe library is correctly installed and import it at the top of the script. Here is the modified code with the mediapipe import at the top:

from __future__ import annotations

from functools import partial

import cv2
import numpy as np
from PIL import Image, ImageDraw
import mediapipe as mp  # Import mediapipe at the top

from adetailer import PredictOutput
from adetailer.common import create_bbox_from_mask, create_mask_from_bbox

def mediapipe_predict(
    model_type: str, image: Image.Image, confidence: float = 0.3
) -> PredictOutput:
    mapping = {
        "mediapipe_face_short": partial(mediapipe_face_detection, 0),
        "mediapipe_face_full": partial(mediapipe_face_detection, 1),
        "mediapipe_face_mesh": mediapipe_face_mesh,
        "mediapipe_face_mesh_eyes_only": mediapipe_face_mesh_eyes_only,
    }
    if model_type in mapping:
        func = mapping[model_type]
        try:
            return func(image, confidence)
        except Exception:
            return PredictOutput()
    msg = f"[-] ADetailer: Invalid mediapipe model type: {model_type}, Available: {list(mapping.keys())!r}"
    raise RuntimeError(msg)

def mediapipe_face_detection(
    model_type: int, image: Image.Image, confidence: float = 0.3
) -> PredictOutput[float]:
    img_width, img_height = image.size

    mp_face_detection = mp.solutions.face_detection
    draw_util = mp.solutions.drawing_utils

    img_array = np.array(image)

    with mp_face_detection.FaceDetection(
        model_selection=model_type, min_detection_confidence=confidence
    ) as face_detector:
        pred = face_detector.process(img_array)

    if pred.detections is None:
        return PredictOutput()

    preview_array = img_array.copy()

    bboxes = []
    for detection in pred.detections:
        draw_util.draw_detection(preview_array, detection)

        bbox = detection.location_data.relative_bounding_box
        x1 = bbox.xmin * img_width
        y1 = bbox.ymin * img_height
        w = bbox.width * img_width
        h = bbox.height * img_height
        x2 = x1 + w
        y2 = y1 + h

        bboxes.append([x1, y1, x2, y2])

    masks = create_mask_from_bbox(bboxes, image.size)
    preview = Image.fromarray(preview_array)

    return PredictOutput(bboxes=bboxes, masks=masks, preview=preview)

def mediapipe_face_mesh(
    image: Image.Image, confidence: float = 0.3
) -> PredictOutput[int]:
    mp_face_mesh = mp.solutions.face_mesh
    draw_util = mp.solutions.drawing_utils
    drawing_styles = mp.solutions.drawing_styles

    w, h = image.size

    with mp_face_mesh.FaceMesh(
        static_image_mode=True, max_num_faces=20, min_detection_confidence=confidence
    ) as face_mesh:
        arr = np.array(image)
        pred = face_mesh.process(arr)

        if pred.multi_face_landmarks is None:
            return PredictOutput()

        preview = arr.copy()
        masks = []

        for landmarks in pred.multi_face_landmarks:
            draw_util.draw_landmarks(
                image=preview,
                landmark_list=landmarks,
                connections=mp_face_mesh.FACEMESH_TESSELATION,
                landmark_drawing_spec=None,
                connection_drawing_spec=drawing_styles.get_default_face_mesh_tesselation_style(),
            )

            points = np.array(
                [[land.x * w, land.y * h] for land in landmarks.landmark], dtype=int
            )
            outline = cv2.convexHull(points).reshape(-1).tolist()

            mask = Image.new("L", image.size, "black")
            draw = ImageDraw.Draw(mask)
            draw.polygon(outline, fill="white")
            masks.append(mask)

        bboxes = create_bbox_from_mask(masks, image.size)
        preview = Image.fromarray(preview)
        return PredictOutput(bboxes=bboxes, masks=masks, preview=preview)

def mediapipe_face_mesh_eyes_only(
    image: Image.Image, confidence: float = 0.3
) -> PredictOutput[int]:
    mp_face_mesh = mp.solutions.face_mesh

    left_idx = np.array(list(mp_face_mesh.FACEMESH_LEFT_EYE)).flatten()
    right_idx = np.array(list(mp_face_mesh.FACEMESH_RIGHT_EYE)).flatten()

    w, h = image.size

    with mp_face_mesh.FaceMesh(
        static_image_mode=True, max_num_faces=20, min_detection_confidence=confidence
    ) as face_mesh:
        arr = np.array(image)
        pred = face_mesh.process(arr)

        if pred.multi_face_landmarks is None:
            return PredictOutput()

        preview = image.copy()
        masks = []

        for landmarks in pred.multi_face_landmarks:
            points = np.array(
                [[land.x * w, land.y * h] for land in landmarks.landmark], dtype=int
            )
            left_eyes = points[left_idx]
            right_eyes = points[right_idx]
            left_outline = cv2.convexHull(left_eyes).reshape(-1).tolist()
            right_outline = cv2.convexHull(right_eyes).reshape(-1).tolist()

            mask = Image.new("L", image.size, "black")
            draw = ImageDraw.Draw(mask)
            for outline in (left_outline, right_outline):
                draw.polygon(outline, fill="white")
            masks.append(mask)

        bboxes = create_bbox_from_mask(masks, image.size)
        preview = draw_preview(preview, bboxes, masks)
        return PredictOutput(bboxes=bboxes, masks=masks, preview=preview)

def draw_preview(
    preview: Image.Image, bboxes: list[list[int]], masks: list[Image.Image]
) -> Image.Image:
    red = Image.new("RGB", preview.size, "red")
    for mask in masks:
        masked = Image.composite(red, preview, mask)
        preview = Image.blend(preview, masked, 0.25)

    draw = ImageDraw.Draw(preview)
    for bbox in bboxes:
        draw.rectangle(bbox, outline="red", width=2)

    return preview
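
The point of moving the import to module level is that, with the lazy per-function import, an ImportError raised while loading mediapipe is caught by the try/except in mediapipe_predict and silently becomes an empty PredictOutput; a module-level import instead makes a broken installation fail loudly as soon as the extension loads.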

Ensure that the mediapipe library is installed in your environment using:

pip install mediapipe
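
To confirm the installation actually works, you can run a minimal sanity check along these lines in the webui's Python environment (a sketch; it assumes the legacy mp.solutions API that adetailer uses):

import numpy as np
import mediapipe as mp

print("mediapipe version:", mp.__version__)

# Run face detection once on a blank RGB image. "detections: None" is the
# expected output here; an import or runtime error instead points at the install.
with mp.solutions.face_detection.FaceDetection(model_selection=1) as detector:
    result = detector.process(np.zeros((256, 256, 3), dtype=np.uint8))
print("detections:", result.detections)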

Additionally, you can use the provided test script to verify that the mediapipe models are working correctly:

import pytest
from PIL import Image

from adetailer.mediapipe import mediapipe_predict

@pytest.mark.parametrize(
    "model_name",
    [
        "mediapipe_face_short",
        "mediapipe_face_full",
        "mediapipe_face_mesh",
        "mediapipe_face_mesh_eyes_only",
    ],
)
def test_mediapipe(sample_image2: Image.Image, model_name: str):
    result = mediapipe_predict(model_name, sample_image2)
    assert result.preview is not None
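
To run the test, save it in the extension's tests folder and invoke pytest from the webui's Python environment (sample_image2 is a fixture defined in adetailer's own test suite). If pytest is not an option, a standalone sketch like the following exercises the same code path; face.jpg stands in for any photo that contains a face:

from PIL import Image

from adetailer.mediapipe import mediapipe_predict

image = Image.open("face.jpg").convert("RGB")  # hypothetical test image
for name in (
    "mediapipe_face_short",
    "mediapipe_face_full",
    "mediapipe_face_mesh",
    "mediapipe_face_mesh_eyes_only",
):
    result = mediapipe_predict(name, image)
    print(name, "->", len(result.bboxes), "detection(s)")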

This should help resolve the errors related to the mediapipe models in adetailer [2][3].

To continue talking to Dosu, mention @dosu.

marc2608 commented 2 months ago

I've done all the checks (except for the test script, because I don't know how to run it) and unfortunately the problem is the same. Do you have any other clues? Thank you very much.

dosubot[bot] commented 2 months ago

After verifying the installation and imports, you can take the following additional steps to diagnose and resolve the errors with the mediapipe models in adetailer:

  1. Check Model Type: Ensure that the model_type you are passing to the mediapipe_predict function is valid. The valid model types are:

    • "mediapipe_face_short"
    • "mediapipe_face_full"
    • "mediapipe_face_mesh"
    • "mediapipe_face_mesh_eyes_only"

    Example:

    result = mediapipe_predict("mediapipe_face_full", image, confidence=0.5)
  2. Verify Image Format: Ensure that the image being passed to the function is in the correct format (PIL.Image.Image). You can convert an image to the required format using:

    from PIL import Image
    image = Image.open("path_to_image.jpg")
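
    Note that mediapipe's detectors expect a 3-channel RGB array, so it is safer to also call image = image.convert("RGB") after opening; a grayscale or RGBA image can make the detector raise, and that exception would then be hidden by the try/except in mediapipe_predict.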
  3. Check for Exceptions: The mediapipe_predict function catches exceptions and returns an empty PredictOutput. You can add logging to capture more details about the exceptions:

    import logging
    
    def mediapipe_predict(
       model_type: str, image: Image.Image, confidence: float = 0.3
    ) -> PredictOutput:
       mapping = {
           "mediapipe_face_short": partial(mediapipe_face_detection, 0),
           "mediapipe_face_full": partial(mediapipe_face_detection, 1),
           "mediapipe_face_mesh": mediapipe_face_mesh,
           "mediapipe_face_mesh_eyes_only": mediapipe_face_mesh_eyes_only,
       }
       if model_type in mapping:
           func = mapping[model_type]
           try:
               return func(image, confidence)
           except Exception as e:
               logging.error(f"Error in mediapipe_predict: {e}")
               return PredictOutput()
       msg = f"[-] ADetailer: Invalid mediapipe model type: {model_type}, Available: {list(mapping.keys())!r}"
       raise RuntimeError(msg)
  4. Check Mediapipe Installation: Ensure that the mediapipe library is correctly installed and up-to-date. You can install or update it using:

    pip install mediapipe --upgrade
  5. Verify Dependencies: Ensure that all other dependencies (e.g., cv2, numpy, PIL) are correctly installed and up-to-date.

  6. Download Models: Ensure that the required models are downloaded correctly. You can use the download_models function to download the necessary models from the Hugging Face repository:

    from adetailer.common import download_models
    
    models = download_models(
       "face_yolov8n.pt",
       "face_yolov8s.pt",
       "hand_yolov8n.pt",
       "person_yolov8n-seg.pt",
       "person_yolov8s-seg.pt",
       "yolov8x-worldv2.pt",
       check_remote=True
    )
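
    For the mediapipe detectors specifically there is nothing to download here: their model files ship inside the mediapipe wheel itself, so this step only rules out problems with the YOLO-based .pt models.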
  7. Ensure Model Paths: Verify that the model paths are correctly set and accessible. You can use the get_models function to scan directories and ensure the models are available:

    from adetailer.common import get_models
    
    model_paths = get_models("/path/to/models", huggingface=True)

By following these steps, you should be able to diagnose and resolve the errors with the mediapipe models in adetailer [1][2].

To continue talking to Dosu, mention @dosu.

marc2608 commented 2 months ago

Here you can see the result in cmd with face_yolov8n.pt compared to the result with mediapipe_face_full. (Screenshots: 2024-09-03 171956 and 2024-09-03 172106.)

marc2608 commented 1 month ago

Here is the content of my mediapipe.py file. Please tell me if something is wrong and how to fix it. Thank you.

from __future__ import annotations

from functools import partial

import cv2
import numpy as np
from PIL import Image, ImageDraw

from adetailer import PredictOutput
from adetailer.common import create_bbox_from_mask, create_mask_from_bbox


def mediapipe_predict(
    model_type: str, image: Image.Image, confidence: float = 0.3
) -> PredictOutput:
    mapping = {
        "mediapipe_face_short": partial(mediapipe_face_detection, 0),
        "mediapipe_face_full": partial(mediapipe_face_detection, 1),
        "mediapipe_face_mesh": mediapipe_face_mesh,
        "mediapipe_face_mesh_eyes_only": mediapipe_face_mesh_eyes_only,
    }
    if model_type in mapping:
        func = mapping[model_type]
        try:
            return func(image, confidence)
        except Exception:
            return PredictOutput()
    msg = f"[-] ADetailer: Invalid mediapipe model type: {model_type}, Available: {list(mapping.keys())!r}"
    raise RuntimeError(msg)


def mediapipe_face_detection(
    model_type: int, image: Image.Image, confidence: float = 0.3
) -> PredictOutput[float]:
    import mediapipe as mp

    img_width, img_height = image.size

    mp_face_detection = mp.solutions.face_detection
    draw_util = mp.solutions.drawing_utils

    img_array = np.array(image)

    with mp_face_detection.FaceDetection(
        model_selection=model_type, min_detection_confidence=confidence
    ) as face_detector:
        pred = face_detector.process(img_array)

    if pred.detections is None:
        return PredictOutput()

    preview_array = img_array.copy()

    bboxes = []
    for detection in pred.detections:
        draw_util.draw_detection(preview_array, detection)

        bbox = detection.location_data.relative_bounding_box
        x1 = bbox.xmin * img_width
        y1 = bbox.ymin * img_height
        w = bbox.width * img_width
        h = bbox.height * img_height
        x2 = x1 + w
        y2 = y1 + h

        bboxes.append([x1, y1, x2, y2])

    masks = create_mask_from_bbox(bboxes, image.size)
    preview = Image.fromarray(preview_array)

    return PredictOutput(bboxes=bboxes, masks=masks, preview=preview)


def mediapipe_face_mesh(
    image: Image.Image, confidence: float = 0.3
) -> PredictOutput[int]:
    import mediapipe as mp

    mp_face_mesh = mp.solutions.face_mesh
    draw_util = mp.solutions.drawing_utils
    drawing_styles = mp.solutions.drawing_styles

    w, h = image.size

    with mp_face_mesh.FaceMesh(
        static_image_mode=True, max_num_faces=20, min_detection_confidence=confidence
    ) as face_mesh:
        arr = np.array(image)
        pred = face_mesh.process(arr)

        if pred.multi_face_landmarks is None:
            return PredictOutput()

        preview = arr.copy()
        masks = []

        for landmarks in pred.multi_face_landmarks:
            draw_util.draw_landmarks(
                image=preview,
                landmark_list=landmarks,
                connections=mp_face_mesh.FACEMESH_TESSELATION,
                landmark_drawing_spec=None,
                connection_drawing_spec=drawing_styles.get_default_face_mesh_tesselation_style(),
            )

            points = np.array(
                [[land.x * w, land.y * h] for land in landmarks.landmark], dtype=int
            )
            outline = cv2.convexHull(points).reshape(-1).tolist()

            mask = Image.new("L", image.size, "black")
            draw = ImageDraw.Draw(mask)
            draw.polygon(outline, fill="white")
            masks.append(mask)

        bboxes = create_bbox_from_mask(masks, image.size)
        preview = Image.fromarray(preview)
        return PredictOutput(bboxes=bboxes, masks=masks, preview=preview)


def mediapipe_face_mesh_eyes_only(
    image: Image.Image, confidence: float = 0.3
) -> PredictOutput[int]:
    import mediapipe as mp

    mp_face_mesh = mp.solutions.face_mesh

    left_idx = np.array(list(mp_face_mesh.FACEMESH_LEFT_EYE)).flatten()
    right_idx = np.array(list(mp_face_mesh.FACEMESH_RIGHT_EYE)).flatten()

    w, h = image.size

    with mp_face_mesh.FaceMesh(
        static_image_mode=True, max_num_faces=20, min_detection_confidence=confidence
    ) as face_mesh:
        arr = np.array(image)
        pred = face_mesh.process(arr)

        if pred.multi_face_landmarks is None:
            return PredictOutput()

        preview = image.copy()
        masks = []

        for landmarks in pred.multi_face_landmarks:
            points = np.array(
                [[land.x * w, land.y * h] for land in landmarks.landmark], dtype=int
            )
            left_eyes = points[left_idx]
            right_eyes = points[right_idx]
            left_outline = cv2.convexHull(left_eyes).reshape(-1).tolist()
            right_outline = cv2.convexHull(right_eyes).reshape(-1).tolist()

            mask = Image.new("L", image.size, "black")
            draw = ImageDraw.Draw(mask)
            for outline in (left_outline, right_outline):
                draw.polygon(outline, fill="white")
            masks.append(mask)

        bboxes = create_bbox_from_mask(masks, image.size)
        preview = draw_preview(preview, bboxes, masks)
        return PredictOutput(bboxes=bboxes, masks=masks, preview=preview)


def draw_preview(
    preview: Image.Image, bboxes: list[list[int]], masks: list[Image.Image]
) -> Image.Image:
    red = Image.new("RGB", preview.size, "red")
    for mask in masks:
        masked = Image.composite(red, preview, mask)
        preview = Image.blend(preview, masked, 0.25)

    draw = ImageDraw.Draw(preview)
    for bbox in bboxes:
        draw.rectangle(bbox, outline="red", width=2)

    return preview

dosubot[bot] commented 1 month ago

The issue with your mediapipe.py script causing the mediapipe_face_full model to not work could be due to an exception being raised within the mediapipe_face_detection function. The mediapipe_predict function catches all exceptions and returns an empty PredictOutput object, which might be why it appears that the model is not working.

To diagnose and fix the issue, you should add logging or print statements to capture the exception details. Here is an example of how you can modify the mediapipe_predict function to log the exception:

import logging

def mediapipe_predict(
    model_type: str, image: Image.Image, confidence: float = 0.3
) -> PredictOutput:
    mapping = {
        "mediapipe_face_short": partial(mediapipe_face_detection, 0),
        "mediapipe_face_full": partial(mediapipe_face_detection, 1),
        "mediapipe_face_mesh": mediapipe_face_mesh,
        "mediapipe_face_mesh_eyes_only": mediapipe_face_mesh_eyes_only,
    }
    if model_type in mapping:
        func = mapping[model_type]
        try:
            return func(image, confidence)
        except Exception as e:
            logging.error(f"Error in mediapipe model {model_type}: {e}")
            return PredictOutput()
    msg = f"[-] ADetailer: Invalid mediapipe model type: {model_type}, Available: {list(mapping.keys())!r}"
    raise RuntimeError(msg)
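
With this change the underlying exception is written to the webui console instead of being silently swallowed, so the next failed generation should reveal exactly which call inside the mediapipe code is raising.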

Additionally, you can use the provided test script to ensure that the mediapipe_face_full model works correctly. Here is the test script:

import pytest
from PIL import Image

from adetailer.mediapipe import mediapipe_predict

@pytest.mark.parametrize(
    "model_name",
    [
        "mediapipe_face_short",
        "mediapipe_face_full",
        "mediapipe_face_mesh",
        "mediapipe_face_mesh_eyes_only",
    ],
)
def test_mediapipe(sample_image2: Image.Image, model_name: str):
    result = mediapipe_predict(model_name, sample_image2)
    assert result.preview is not None

Run this test to verify that the mediapipe_face_full model is functioning as expected. If the test fails, the logged error messages will help you identify and fix the issue [1][2].

To continue talking to Dosu, mention @dosu.