machineminded / Fooocus-inswapper

Focus on prompting and generating with an inswapper integration
GNU General Public License v3.0
69 stars · 14 forks

osx installation [newbie] #20

Open sitzbrau opened 6 months ago

sitzbrau commented 6 months ago

Read Troubleshoot

[x] I confirm that I have read the Troubleshoot guide before making this issue.

Describe the problem
Thanks. I've followed the installation steps, but the Inswapper tab doesn't appear:

Full Console Log

ERROR: Could not find a version that satisfies the requirement onnxruntime-gpu==1.17.0 (from versions: none)
ERROR: No matching distribution found for onnxruntime-gpu==1.17.0
DEPRECATION: torchsde 0.2.5 has a non-standard dependency specifier numpy>=1.19.*; python_version >= "3.7". pip 24.0 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of torchsde or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063
(base) silvio@Silvios-iPro Fooocus-inswapper % .\venv\Scripts\activate     
zsh: command not found: .venvScriptsactivate
machineminded commented 6 months ago

Hello @sitzbrau, thank you for the report. I would just like you to double-check that you are cloning the correct repository first. Since you said the Inswapper tab is not available, it makes me think you have cloned the base Fooocus repo. A failed configure.sh will not cause a hidden Inswapper tab.

Regarding usage on OSX, I believe the installation should be the same as for the base repository. Please refer here:

https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#mac

Note that even the base repository is not intensively tested on OSX. I believe you should complete step one in the link above. From there, you can run configure.sh, which will set up the Python venv, configure inswapper, and download the appropriate models. You will want to activate the venv with source venv/bin/activate. Finally, try to launch with python launch.py --disable-offload-from-vram (note this is taken from step seven in the link above).

Please check these things and try again, and let me know what issues you run into.

sitzbrau commented 6 months ago

> Hello @sitzbrau, thank you for the report. I would just like you to double-check that you are cloning the correct repository first. Since you said the Inswapper tab is not available, it makes me think you have cloned the base Fooocus repo. A failed configure.sh will not cause a hidden Inswapper tab.
>
> Regarding usage on OSX, I believe the installation should be the same as for the base repository. Please refer here:
>
> https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#mac
>
> Note that even the base repository is not intensively tested on OSX. I believe you should complete step one in the link above. From there, you can run configure.sh, which will set up the Python venv, configure inswapper, and download the appropriate models. You will want to activate the venv with source venv/bin/activate. Finally, try to launch with python launch.py --disable-offload-from-vram (note this is taken from step seven in the link above).
>
> Please check these things and try again, and let me know what issues you run into.

Hi, thanks for your support. I confirm that the repo is correct. When I run python launch.py I get the following:

Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
Exception in thread Thread-3 (worker):
Traceback (most recent call last):
  File "/Users/silvio/miniconda3/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/Users/silvio/miniconda3/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/silvio/Fooocus/Fooocus-inswapper/modules/async_worker.py", line 47, in worker
    from modules.face_swap import perform_face_swap
  File "/Users/silvio/Fooocus/Fooocus-inswapper/modules/face_swap.py", line 6, in <module>
    from inswapper.swapper import process
  File "/Users/silvio/Fooocus/Fooocus-inswapper/inswapper/swapper.py", line 11, in <module>
    import insightface
ModuleNotFoundError: No module named 'insightface'
photomaker-v1.bin: 100%|███████████████████████████████████████████████████████████████| 934M/934M [00:08<00:00, 116MB/s]
/Users/silvio/miniconda3/lib/python3.11/site-packages/diffusers/utils/outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
Traceback (most recent call last):
  File "/Users/silvio/Fooocus/Fooocus-inswapper/launch.py", line 128, in <module>
    from webui import *
  File "/Users/silvio/Fooocus/Fooocus-inswapper/webui.py", line 19, in <module>
    import modules.instantid as instantid
  File "/Users/silvio/Fooocus/Fooocus-inswapper/modules/instantid.py", line 13, in <module>
    from insightface.app import FaceAnalysis
ModuleNotFoundError: No module named 'insightface'
machineminded commented 6 months ago

@sitzbrau Are you able to see the Inswapper tab now? For now can we try:

pip install --upgrade --force-reinstall insightface

and go from there. Maybe more errors but we'll get it working
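Before reinstalling, it can help to confirm which packages are actually missing from the active environment. A minimal sketch (the module names below are the ones the tracebacks in this thread complain about):

```python
import importlib.util

def module_available(name: str) -> bool:
    """True if `name` can be imported in the current environment."""
    return importlib.util.find_spec(name) is not None

# Modules the ModuleNotFoundError tracebacks above mention:
for mod in ("insightface", "cv2", "gradio"):
    print(mod, "OK" if module_available(mod) else "MISSING")
```

If a module shows as MISSING here but pip says it is installed, you are almost certainly running Python from a different environment than the one pip installed into.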

sitzbrau commented 6 months ago

Still can't run it on SageMaker Studio Lab. I've successfully set up the conda environment, activated it, and installed the requirements (twice, with pip install -r requirements_versions.txt and configure.bat). Then I tried to start with run.bat:

(fooocus) studio-lab-user@default:~/Fooocus-inswapper$ bash run.bat
run.bat: line 1: @echo: command not found
run.bat: line 2: setlocal: command not found
run.bat: line 4: rem: command not found
run.bat: line 5: call: command not found
run.bat: line 7: rem: command not found
[System ARGV] ['launch.py', '%*']
Python 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
Fooocus version: 2.1.865
usage: launch.py [-h] [--listen [IP]] [--port PORT] [--disable-header-check [ORIGIN]] [--web-upload-size WEB_UPLOAD_SIZE]
                 [--external-working-path PATH [PATH ...]] [--output-path OUTPUT_PATH] [--temp-path TEMP_PATH]
                 [--cache-path CACHE_PATH] [--in-browser] [--disable-in-browser] [--gpu-device-id DEVICE_ID]
                 [--async-cuda-allocation | --disable-async-cuda-allocation] [--disable-attention-upcast]
                 [--all-in-fp32 | --all-in-fp16] [--unet-in-bf16 | --unet-in-fp16 | --unet-in-fp8-e4m3fn | --unet-in-fp8-e5m2]
                 [--vae-in-fp16 | --vae-in-fp32 | --vae-in-bf16] [--vae-in-cpu]
                 [--clip-in-fp8-e4m3fn | --clip-in-fp8-e5m2 | --clip-in-fp16 | --clip-in-fp32] [--directml [DIRECTML_DEVICE]]
                 [--disable-ipex-hijack] [--preview-option [none,auto,fast,taesd]]
                 [--attention-split | --attention-quad | --attention-pytorch] [--disable-xformers]
                 [--always-gpu | --always-high-vram | --always-normal-vram | --always-low-vram | --always-no-vram | --always-cpu]
                 [--always-offload-from-vram] [--pytorch-deterministic] [--disable-server-log] [--debug-mode]
                 [--is-windows-embedded-python] [--disable-server-info] [--multi-user] [--share] [--preset PRESET]
                 [--language LANGUAGE] [--disable-offload-from-vram] [--theme THEME] [--disable-image-log] [--disable-analytics]
                 [--disable-preset-download] [--always-download-new-model]
launch.py: error: unrecognized arguments: %*

and with run.sh

(fooocus) studio-lab-user@default:~/Fooocus-inswapper$ sh run.sh
run.sh: 4: source: not found
[System ARGV] ['launch.py']
Python 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
Fooocus version: 2.1.865
Traceback (most recent call last):
  File "/home/studio-lab-user/Fooocus-inswapper/launch.py", line 87, in <module>
    from modules import config
  File "/home/studio-lab-user/Fooocus-inswapper/modules/config.py", line 7, in <module>
    import modules.sdxl_styles
  File "/home/studio-lab-user/Fooocus-inswapper/modules/sdxl_styles.py", line 5, in <module>
    from modules.util import get_files_from_folder
  File "/home/studio-lab-user/Fooocus-inswapper/modules/util.py", line 6, in <module>
    import cv2
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/cv2/__init__.py", line 181, in <module>
    bootstrap()
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/cv2/__init__.py", line 153, in bootstrap
    native_module = importlib.import_module("cv2")
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ImportError: libgthread-2.0.so.0: cannot open shared object file: No such file or directory

and with launch.py

(fooocus) studio-lab-user@default:~/Fooocus-inswapper$ python launch.py
[System ARGV] ['launch.py']
Python 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
Fooocus version: 2.1.865
Traceback (most recent call last):
  File "/home/studio-lab-user/Fooocus-inswapper/launch.py", line 87, in <module>
    from modules import config
  File "/home/studio-lab-user/Fooocus-inswapper/modules/config.py", line 7, in <module>
    import modules.sdxl_styles
  File "/home/studio-lab-user/Fooocus-inswapper/modules/sdxl_styles.py", line 5, in <module>
    from modules.util import get_files_from_folder
  File "/home/studio-lab-user/Fooocus-inswapper/modules/util.py", line 6, in <module>
    import cv2
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/cv2/__init__.py", line 181, in <module>
    bootstrap()
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/cv2/__init__.py", line 153, in bootstrap
    native_module = importlib.import_module("cv2")
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ImportError: libgthread-2.0.so.0: cannot open shared object file: No such file or directory

nothing worked :(

sitzbrau commented 6 months ago

> What if you skip the run.sh and try python launch.py?

(fooocus) studio-lab-user@default:~/Fooocus-inswapper$ python launch.py
[System ARGV] ['launch.py']
Python 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
Fooocus version: 2.1.865
Traceback (most recent call last):
  File "/home/studio-lab-user/Fooocus-inswapper/launch.py", line 87, in <module>
    from modules import config
  File "/home/studio-lab-user/Fooocus-inswapper/modules/config.py", line 7, in <module>
    import modules.sdxl_styles
  File "/home/studio-lab-user/Fooocus-inswapper/modules/sdxl_styles.py", line 5, in <module>
    from modules.util import get_files_from_folder
  File "/home/studio-lab-user/Fooocus-inswapper/modules/util.py", line 6, in <module>
    import cv2
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/cv2/__init__.py", line 181, in <module>
    bootstrap()
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/cv2/__init__.py", line 153, in bootstrap
    native_module = importlib.import_module("cv2")
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ImportError: libgthread-2.0.so.0: cannot open shared object file: No such file or directory
machineminded commented 6 months ago

Can you check this? OSX may have different requirements. I just put the error message into Google:

https://stackoverflow.com/questions/62786028/importerror-libgthread-2-0-so-0-cannot-open-shared-object-file-no-such-file-o

sitzbrau commented 6 months ago

> Can you check this? OSX may have different requirements. I just put the error message into Google:
>
> https://stackoverflow.com/questions/62786028/importerror-libgthread-2-0-so-0-cannot-open-shared-object-file-no-such-file-o

Fixed with conda install glib=2.51.0 -y. Now I'm stuck again because SageMaker Lab doesn't have enough space:

ControlNetModel/config.json: 100%|████████████████████████████████████████████████████████████████| 1.38k/1.38k [00:00<00:00, 8.78MB/s]
/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/huggingface_hub/file_download.py:983: UserWarning: Not enough free disk space to download the file. The expected file size is: 2502.14 MB. The target location /home/studio-lab-user/.cache/huggingface/hub only has 1981.10 MB free disk space.
  warnings.warn(
/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/huggingface_hub/file_download.py:983: UserWarning: Not enough free disk space to download the file. The expected file size is: 2502.14 MB. The target location /home/studio-lab-user/.cache/huggingface/hub/models--InstantX--InstantID/blobs only has 1981.10 MB free disk space.
  warnings.warn(
/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/huggingface_hub/file_download.py:983: UserWarning: Not enough free disk space to download the file. The expected file size is: 2502.14 MB. The target location InstantID/checkpoints only has 1981.10 MB free disk space.
  warnings.warn(
diffusion_pytorch_model.safetensors:  79%|████████████████████████████████████████████▎           | 1.98G/2.50G [00:21<00:05, 92.4MB/s]
Exception in thread Thread-4 (worker):
Traceback (most recent call last):
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/studio-lab-user/Fooocus-inswapper/modules/async_worker.py", line 49, in worker
    from modules.instantid import generate_instantid
ImportError: cannot import name 'generate_instantid' from 'modules.instantid' (/home/studio-lab-user/Fooocus-inswapper/modules/instantid.py)
Traceback (most recent call last):
  File "/home/studio-lab-user/Fooocus-inswapper/launch.py", line 128, in <module>
    from webui import *
  File "/home/studio-lab-user/Fooocus-inswapper/webui.py", line 19, in <module>
    import modules.instantid as instantid
  File "/home/studio-lab-user/Fooocus-inswapper/modules/instantid.py", line 39, in <module>
    hf_hub_download(repo_id="InstantX/InstantID", filename="ControlNetModel/diffusion_pytorch_model.safetensors", local_dir="InstantID/checkpoints")
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1457, in hf_hub_download
    http_get(
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 527, in http_get
    temp_file.write(chunk)
  File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/tempfile.py", line 483, in func_wrapper
    return func(*args, **kwargs)
OSError: [Errno 28] No space left on device
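The huggingface_hub warnings above already tell you the shortfall before the download dies. A quick sketch for pre-checking free space yourself (the 2502 MB figure is taken from the warning above):

```python
import shutil

def free_mb(path: str = ".") -> float:
    """Free disk space at `path`, in megabytes."""
    return shutil.disk_usage(path).free / (1024 ** 2)

# The warning reports the file needs ~2502 MB but only ~1981 MB is free.
needed_mb = 2502.14
have_mb = free_mb(".")
print(f"need {needed_mb:.0f} MB, have {have_mb:.0f} MB:",
      "OK" if have_mb >= needed_mb else "NOT ENOUGH")
```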
sitzbrau commented 6 months ago

Is it possible to disable the automatic Juggernaut checkpoint download?

machineminded commented 6 months ago

> Is it possible to disable the automatic Juggernaut checkpoint download?

There is a way - you will need to update the config text file. Please see:

https://github.com/lllyasviel/Fooocus/issues/1827

machineminded commented 6 months ago

You might have to trick Fooocus into thinking a model is available somehow, perhaps by creating a dummy safetensors file in the directory where it expects one.
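A sketch of that dummy-file trick. The folder layout and filename here are hypothetical placeholders, not the actual paths Fooocus uses; adjust them to whatever your config file points at. Note that a zero-byte file may only skip the download check and could still fail later if Fooocus tries to actually load it:

```python
import tempfile
from pathlib import Path

def make_dummy_checkpoint(base: Path, name: str = "dummy_model.safetensors") -> Path:
    """Create a zero-byte placeholder checkpoint under base/models/checkpoints.

    Both the subfolder and the default filename are illustrative assumptions.
    """
    ckpt_dir = base / "models" / "checkpoints"
    ckpt_dir.mkdir(parents=True, exist_ok=True)
    dummy = ckpt_dir / name
    dummy.touch(exist_ok=True)
    return dummy

# Demo in a temporary directory; point `base` at your Fooocus folder instead.
with tempfile.TemporaryDirectory() as tmp:
    print(make_dummy_checkpoint(Path(tmp)))
```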

sitzbrau commented 6 months ago

So I moved to Colab, where I have 100 GB, and everything seems fine, but it runs locally on a public IP address. How do I run it through Gradio or ngrok?

!python launch.py --listen

Running on local URL:  http://0.0.0.0:7865/

To create a public link, set `share=True` in `launch()`.
machineminded commented 6 months ago

Can you try the --share argument instead?

Looking at the Jupyter notebook, it uses python launch.py --share. I know when I've used this it gives me a Gradio link.

sitzbrau commented 6 months ago

> Can you try the --share argument instead?
>
> Looking at the Jupyter notebook, it uses python launch.py --share. I know when I've used this it gives me a Gradio link.

First, thank you so much for the support.

!python launch.py --share


[System ARGV] ['launch.py', '--share']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.1.865
Traceback (most recent call last):
  File "/content/drive/MyDrive/Fooocus-inswapper/launch.py", line 128, in <module>
    from webui import *
  File "/content/drive/MyDrive/Fooocus-inswapper/webui.py", line 1, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'
machineminded commented 6 months ago

Weird. No gradio? Let's try pip install gradio==3.41.2. Pulled that from the requirements_versions.txt file.
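A small sketch for checking whether (and which version of) a distribution is installed before reaching for pip, using the standard library:

```python
from importlib import metadata

def dist_version(name: str):
    """Installed version of a distribution, or None if it is absent."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None

print("gradio:", dist_version("gradio") or "not installed -> pip install gradio==3.41.2")
```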

sitzbrau commented 6 months ago

> Weird. No gradio? Let's try pip install gradio==3.41.2. Pulled that from the requirements_versions.txt file.

After several attempts it's finally running on Gradio! Now let's see if Colab disconnects the runtime after some time. Thank you again.

machineminded commented 6 months ago

You're welcome @sitzbrau :) Please let me know if you run into any other issues, otherwise I will close this issue in a few days. Thank you 🎉🎉

sitzbrau commented 6 months ago

> You're welcome @sitzbrau :) Please let me know if you run into any other issues, otherwise I will close this issue in a few days. Thank you 🎉🎉

Running on Colab Pro, with a source image and Inswapper enabled:

2024-02-21 16:13:34.685522359 [E:onnxruntime:Default, provider_bridge_ort.cc:1532 TryGetProviderInfo_TensorRT] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_tensorrt.so with error: libcublas.so.11: cannot open shared object file: No such file or directory

*************** EP Error ***************
EP Error /onnxruntime_src/onnxruntime/python/onnxruntime_pybind_state.cc:456 void onnxruntime::python::RegisterTensorRTPluginsAsCustomOps(onnxruntime::python::PySessionOptions&, const ProviderOptions&) Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2024-02-21 16:13:43.195886308 [E:onnxruntime:Default, provider_bridge_ort.cc:1546 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
2024-02-21 16:13:43.994733094 [E:onnxruntime:Default, provider_bridge_ort.cc:1532 TryGetProviderInfo_TensorRT] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_tensorrt.so with error: libcublas.so.11: cannot open shared object file: No such file or directory

*************** EP Error ***************
EP Error /onnxruntime_src/onnxruntime/python/onnxruntime_pybind_state.cc:456 void onnxruntime::python::RegisterTensorRTPluginsAsCustomOps(onnxruntime::python::PySessionOptions&, const ProviderOptions&) Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2024-02-21 16:13:44.724004320 [E:onnxruntime:Default, provider_bridge_ort.cc:1546 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
2024-02-21 16:13:44.762567889 [E:onnxruntime:Default, provider_bridge_ort.cc:1532 TryGetProviderInfo_TensorRT] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_tensorrt.so with error: libcublas.so.11: cannot open shared object file: No such file or directory

*************** EP Error ***************
EP Error /onnxruntime_src/onnxruntime/python/onnxruntime_pybind_state.cc:456 void onnxruntime::python::RegisterTensorRTPluginsAsCustomOps(onnxruntime::python::PySessionOptions&, const ProviderOptions&) Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2024-02-21 16:13:46.455412418 [E:onnxruntime:Default, provider_bridge_ort.cc:1546 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
2024-02-21 16:13:46.595872423 [E:onnxruntime:Default, provider_bridge_ort.cc:1532 TryGetProviderInfo_TensorRT] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_tensorrt.so with error: libcublas.so.11: cannot open shared object file: No such file or directory

*************** EP Error ***************
EP Error /onnxruntime_src/onnxruntime/python/onnxruntime_pybind_state.cc:456 void onnxruntime::python::RegisterTensorRTPluginsAsCustomOps(onnxruntime::python::PySessionOptions&, const ProviderOptions&) Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2024-02-21 16:13:47.187870049 [E:onnxruntime:Default, provider_bridge_ort.cc:1546 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
2024-02-21 16:13:47.235245041 [E:onnxruntime:Default, provider_bridge_ort.cc:1532 TryGetProviderInfo_TensorRT] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_tensorrt.so with error: libcublas.so.11: cannot open shared object file: No such file or directory

*************** EP Error ***************
EP Error /onnxruntime_src/onnxruntime/python/onnxruntime_pybind_state.cc:456 void onnxruntime::python::RegisterTensorRTPluginsAsCustomOps(onnxruntime::python::PySessionOptions&, const ProviderOptions&) Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2024-02-21 16:13:55.052182266 [E:onnxruntime:Default, provider_bridge_ort.cc:1546 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (320, 320)
Traceback (most recent call last):
  File "/content/drive/MyDrive/Fooocus-inswapper/modules/async_worker.py", line 885, in worker
    handler(task)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/drive/MyDrive/Fooocus-inswapper/modules/async_worker.py", line 837, in handler
    imgs = perform_face_swap(imgs, inswapper_source_image, inswapper_target_image_index)
  File "/content/drive/MyDrive/Fooocus-inswapper/modules/face_swap.py", line 14, in perform_face_swap
    result_image = process([source_image], item, "-1", f"{int(inswapper_target_image_index)}", "../inswapper/checkpoints/inswapper_128.onnx")
  File "/content/drive/MyDrive/Fooocus-inswapper/inswapper/swapper.py", line 79, in process
    face_swapper = getFaceSwapModel(model_path)
  File "/content/drive/MyDrive/Fooocus-inswapper/inswapper/swapper.py", line 19, in getFaceSwapModel
    model = insightface.model_zoo.get_model(model_path)
  File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 91, in get_model
    assert osp.exists(model_file), 'model_file %s should exist'%model_file
AssertionError: model_file /content/drive/MyDrive/Fooocus-inswapper/inswapper/../inswapper/checkpoints/inswapper_128.onnx should exist
Total time: 271.72 seconds
machineminded commented 6 months ago

Did you run configure.sh? You could try this workaround for now. Run these commands in the Fooocus-inswapper folder:

git clone https://github.com/haofanwang/inswapper.git
cd inswapper
git clone https://huggingface.co/spaces/sczhou/CodeFormer
cd ..
cp -r inswapper/CodeFormer/CodeFormer/basicsr venv/lib/python*/site-packages/
cp -r inswapper/CodeFormer/CodeFormer/facelib venv/lib/python*/site-packages/
mkdir -p inswapper/checkpoints
wget https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128.onnx -O inswapper/checkpoints/inswapper_128.onnx
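For verifying that the wget above put the file where the failing assertion looks: the traceback shows the model path is joined relative to the inswapper/ package directory, so the "../inswapper" prefix folds back into that same directory. A quick sketch of that resolution:

```python
import os.path as osp

def resolve_model_path(repo_root: str) -> str:
    """Reproduce the path from the AssertionError: swapper.py joins
    '../inswapper/checkpoints/inswapper_128.onnx' relative to inswapper/,
    which normalizes back into <repo_root>/inswapper/checkpoints/."""
    return osp.normpath(
        osp.join(repo_root, "inswapper", "..", "inswapper",
                 "checkpoints", "inswapper_128.onnx"))

path = resolve_model_path("/content/drive/MyDrive/Fooocus-inswapper")
print(path)
```

If the printed path doesn't contain a real inswapper_128.onnx after the wget, the download went to the wrong directory.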
machineminded commented 6 months ago

I guess this will be different if you are running it in Colab. The files would go into /usr/local/lib/python*/dist-packages instead of the venv directory. From my understanding, Colab doesn't need a venv since it is already an isolated environment by default.

sitzbrau commented 6 months ago

> Did you run configure.sh? You could try this workaround for now. Run these commands in the Fooocus-inswapper folder:
>
> git clone https://github.com/haofanwang/inswapper.git
> cd inswapper
> git clone https://huggingface.co/spaces/sczhou/CodeFormer
> cd ..
> cp -r inswapper/CodeFormer/CodeFormer/basicsr venv/lib/python*/site-packages/
> cp -r inswapper/CodeFormer/CodeFormer/facelib venv/lib/python*/site-packages/
> mkdir -p inswapper/checkpoints
> wget https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128.onnx -O inswapper/checkpoints/inswapper_128.onnx

I've done that, but it still gives me this when I use Inswapper. Any solution?

2024-02-22 09:25:51.666115398 [E:onnxruntime:Default, provider_bridge_ort.cc:1532 TryGetProviderInfo_TensorRT] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_tensorrt.so with error: libnvinfer.so.8: cannot open shared object file: No such file or directory

*************** EP Error ***************
EP Error /onnxruntime_src/onnxruntime/python/onnxruntime_pybind_state.cc:456 void onnxruntime::python::RegisterTensorRTPluginsAsCustomOps(onnxruntime::python::PySessionOptions&, const ProviderOptions&) Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'prefer_nhwc': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_max_tuning_duration_ms': '0', 'use_ep_level_unified_stream': '0', 'tunable_op_enable': '0', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'do_copy_in_default_stream': '1', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'gpu_external_empty_cache': '0', 'gpu_external_free': '0', 'tunable_op_tuning_enable': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'gpu_external_alloc': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'has_user_compute_stream': '0', 'gpu_mem_limit': '18446744073709551615', 'device_id': '0'}}
find model: ./checkpoints/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
2024-02-22 09:25:53.888437710 [E:onnxruntime:Default, provider_bridge_ort.cc:1532 TryGetProviderInfo_TensorRT] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_tensorrt.so with error: libnvinfer.so.8: cannot open shared object file: No such file or directory

*************** EP Error ***************
EP Error /onnxruntime_src/onnxruntime/python/onnxruntime_pybind_state.cc:456 void onnxruntime::python::RegisterTensorRTPluginsAsCustomOps(onnxruntime::python::PySessionOptions&, const ProviderOptions&) Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'prefer_nhwc': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_max_tuning_duration_ms': '0', 'use_ep_level_unified_stream': '0', 'tunable_op_enable': '0', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'do_copy_in_default_stream': '1', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'gpu_external_empty_cache': '0', 'gpu_external_free': '0', 'tunable_op_tuning_enable': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'gpu_external_alloc': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'has_user_compute_stream': '0', 'gpu_mem_limit': '18446744073709551615', 'device_id': '0'}}
find model: ./checkpoints/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
2024-02-22 09:25:53.970241601 [E:onnxruntime:Default, provider_bridge_ort.cc:1532 TryGetProviderInfo_TensorRT] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_tensorrt.so with error: libnvinfer.so.8: cannot open shared object file: No such file or directory

*************** EP Error ***************
EP Error /onnxruntime_src/onnxruntime/python/onnxruntime_pybind_state.cc:456 void onnxruntime::python::RegisterTensorRTPluginsAsCustomOps(onnxruntime::python::PySessionOptions&, const ProviderOptions&) Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'prefer_nhwc': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_max_tuning_duration_ms': '0', 'use_ep_level_unified_stream': '0', 'tunable_op_enable': '0', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'do_copy_in_default_stream': '1', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'gpu_external_empty_cache': '0', 'gpu_external_free': '0', 'tunable_op_tuning_enable': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'gpu_external_alloc': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'has_user_compute_stream': '0', 'gpu_mem_limit': '18446744073709551615', 'device_id': '0'}}
find model: ./checkpoints/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
2024-02-22 09:25:54.221086126 [E:onnxruntime:Default, provider_bridge_ort.cc:1532 TryGetProviderInfo_TensorRT] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_tensorrt.so with error: libnvinfer.so.8: cannot open shared object file: No such file or directory

*************** EP Error ***************
EP Error /onnxruntime_src/onnxruntime/python/onnxruntime_pybind_state.cc:456 void onnxruntime::python::RegisterTensorRTPluginsAsCustomOps(onnxruntime::python::PySessionOptions&, const ProviderOptions&) Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'prefer_nhwc': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_max_tuning_duration_ms': '0', 'use_ep_level_unified_stream': '0', 'tunable_op_enable': '0', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'do_copy_in_default_stream': '1', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'gpu_external_empty_cache': '0', 'gpu_external_free': '0', 'tunable_op_tuning_enable': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'gpu_external_alloc': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'has_user_compute_stream': '0', 'gpu_mem_limit': '18446744073709551615', 'device_id': '0'}}
find model: ./checkpoints/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
2024-02-22 09:25:54.279350992 [E:onnxruntime:Default, provider_bridge_ort.cc:1532 TryGetProviderInfo_TensorRT] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_tensorrt.so with error: libnvinfer.so.8: cannot open shared object file: No such file or directory

*************** EP Error ***************
EP Error /onnxruntime_src/onnxruntime/python/onnxruntime_pybind_state.cc:456 void onnxruntime::python::RegisterTensorRTPluginsAsCustomOps(onnxruntime::python::PySessionOptions&, const ProviderOptions&) Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'prefer_nhwc': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_max_tuning_duration_ms': '0', 'use_ep_level_unified_stream': '0', 'tunable_op_enable': '0', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'do_copy_in_default_stream': '1', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'gpu_external_empty_cache': '0', 'gpu_external_free': '0', 'tunable_op_tuning_enable': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'gpu_external_alloc': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'has_user_compute_stream': '0', 'gpu_mem_limit': '18446744073709551615', 'device_id': '0'}}
find model: ./checkpoints/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (320, 320)
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'prefer_nhwc': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_max_tuning_duration_ms': '0', 'use_ep_level_unified_stream': '0', 'tunable_op_enable': '0', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'do_copy_in_default_stream': '1', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'gpu_external_empty_cache': '0', 'gpu_external_free': '0', 'tunable_op_tuning_enable': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'gpu_external_alloc': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'has_user_compute_stream': '0', 'gpu_mem_limit': '18446744073709551615', 'device_id': '0'}}
inswapper-shape: [1, 3, 128, 128]
/usr/local/lib/python3.10/dist-packages/insightface/utils/transform.py:68: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
  P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4
Source faces: 1
Target faces: 0
Replacing specific face(s) in the target image with specific face(s) from the source image
Traceback (most recent call last):
  File "/content/drive/MyDrive/Fooocus-inswapper/modules/async_worker.py", line 885, in worker
    handler(task)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/drive/MyDrive/Fooocus-inswapper/modules/async_worker.py", line 837, in handler
    imgs = perform_face_swap(imgs, inswapper_source_image, inswapper_target_image_index)
  File "/content/drive/MyDrive/Fooocus-inswapper/modules/face_swap.py", line 14, in perform_face_swap
    result_image = process([source_image], item, "-1", f"{int(inswapper_target_image_index)}", "../inswapper/checkpoints/inswapper_128.onnx")
  File "/content/drive/MyDrive/Fooocus-inswapper/inswapper/swapper.py", line 163, in process
    raise Exception("Number of target indexes is greater than the number of faces in the target image")
Exception: Number of target indexes is greater than the number of faces in the target image
Total time: 32.13 seconds
sitzbrau commented 6 months ago

If I try to use just faceswap:

Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 1.29 seconds
Total time: 25.16 seconds
Inswapper: DISABLED
PhotoMaker: DISABLED
InstantID: DISABLED
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 757202348213287789
[Fooocus] Downloading control models ...
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 60 - 30
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.25 seconds
[Fooocus] Encoding negative #1 ...
[Fooocus] Image processing ...
Detected 1 faces
Requested to load CLIPVisionModelWithProjection
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 2.50 seconds
Requested to load Resampler
Loading 1 new model
^C

image

machineminded commented 6 months ago

Is your input index set to zero on the inswapper tab?

sitzbrau commented 5 months ago

Is your input index set to zero on the inswapper tab?

Nope, it's set to 1.

machineminded commented 5 months ago

Can you try zero?

bjorndunkel commented 5 months ago

@machineminded I tried to follow the instructions but got stuck. Do you have a Mac installation recommendation, or how to start?

machineminded commented 5 months ago

@machineminded I tried to follow the instructions but got stuck. Do you have a Mac installation recommendation, or how to start?

I would use the official repository's instructions for setup:

https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#mac

From there, you would want to install only the necessary stuff for inswapper, photomaker and instantid. If you just want to use inswapper, you could run the following after completing the steps from the official repo and activating the virtual environment:

# Clone the repository
git clone https://github.com/haofanwang/inswapper.git
cd inswapper

# Install Git LFS
git lfs install

# Clone the Hugging Face model
git clone https://huggingface.co/spaces/sczhou/CodeFormer

# Copy directories
echo "Copying basicsr"
cp -r inswapper/CodeFormer/CodeFormer/basicsr venv/lib/python*/site-packages/
echo "Copying facelib"
cp -r inswapper/CodeFormer/CodeFormer/facelib venv/lib/python*/site-packages/
sitzbrau commented 5 months ago

@machineminded I tried to follow the instructions but got stuck. Do you have a Mac installation recommendation, or how to start?

I would use the official repository's instructions for setup:

https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#mac

From there, you would want to install only the necessary stuff for inswapper, photomaker and instantid. If you just want to use inswapper, you could run the following after completing the steps from the official repo and activating the virtual environment:

# Clone the repository
git clone https://github.com/haofanwang/inswapper.git
cd inswapper

# Install Git LFS
git lfs install

# Clone the Hugging Face model
git clone https://huggingface.co/spaces/sczhou/CodeFormer

# Copy directories

echo "Copying basicsr"
cp -r inswapper/CodeFormer/CodeFormer/basicsr venv/lib/python*/site-packages/
echo "Copying facelib"
cp -r inswapper/CodeFormer/CodeFormer/facelib venv/lib/python*/site-packages/

/content/drive/MyDrive/Fooocus/inswapper
Copying basicsr
cp: cannot stat 'inswapper/CodeFormer/CodeFormer/basicsr': No such file or directory
Copying facelib
cp: cannot stat 'inswapper/CodeFormer/CodeFormer/facelib': No such file or directory



I can't see any venv folder.
![image](https://github.com/machineminded/Fooocus-inswapper/assets/158448004/a4d788a9-f4ae-4b74-886a-3525a7a9e360)
machineminded commented 5 months ago

If you're running on osx, make sure you run configure.sh after completing the first step of the official doc's osx installation instructions. I don't have a Mac to test this on so I'm making my best effort guess to get it working.

In your case we are missing the venv entirely, so it needs to be created. Then the inswapper and CodeFormer repos need to be checked out and copied into the newly created venv's site-packages folder.
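For reference, creating the venv and finding its site-packages directory can be sketched in Python. This is only a sketch of what configure.sh would set up; the function name and the POSIX directory layout are assumptions (Windows uses venv/Lib/site-packages instead):

```python
import sys
import venv
from pathlib import Path

def create_venv_site_packages(root: Path) -> Path:
    """Create a venv under `root` and return its site-packages directory,
    i.e. the place where the basicsr and facelib folders from CodeFormer
    would be copied. Hypothetical helper; POSIX layout only."""
    venv_dir = root / "venv"
    # with_pip=False keeps creation fast; configure.sh would install deps anyway
    venv.EnvBuilder(with_pip=False).create(venv_dir)
    ver = f"python{sys.version_info.major}.{sys.version_info.minor}"
    return venv_dir / "lib" / ver / "site-packages"
```

From there, `shutil.copytree` (or the `cp -r` commands quoted above) would copy `basicsr` and `facelib` into the returned directory.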

bjorndunkel commented 5 months ago

I'm stuck here too.

Bildschirmfoto 2024-02-29 um 10 59 07

I have fooocus and miniconda running great, and I like to install for inswapper, photomaker and instantid.

If I search for venv or site-packages I can find them inside the miniconda folder, but there are many of them.

Bildschirmfoto 2024-02-29 um 11 07 55 Bildschirmfoto 2024-02-29 um 11 11 59
sitzbrau commented 5 months ago

If you're running on osx, make sure you run configure.sh after completing the first step of the official doc's osx installation instructions. I don't have a Mac to test this on so I'm making my best effort guess to get it working.

In your case we are missing the venv entirely, so it needs to be created. Then the inswapper and CodeFormer repos need to be checked out and copied into the newly created venv's site-packages folder.

I'm running it on Google Colab Pro. Standard Fooocus runs smoothly. image In Fooocus-inswapper, as you can see, the InstantID repos seem to have been copied along with your repo, but I don't know what the venv folder is or how to create it.

machineminded commented 5 months ago

I'm confused - are you trying to install into OSX or Colab? I haven't worked on the colab stuff but I'm aware the current configure scripts don't work. I don't have Colab pro so I'm not able to test it easily. For OSX I would expect that a venv is created. In colab there is no need for a venv because the environment is already ephemeral, which is why you aren't seeing one.

sitzbrau commented 5 months ago

I'm confused - are you trying to install into OSX or Colab? I haven't worked on the colab stuff but I'm aware the current configure scripts don't work. I don't have Colab pro so I'm not able to test it easily. For OSX I would expect that a venv is created. In colab there is no need for a venv because the environment is already ephemeral, which is why you aren't seeing one.

As I said before, I moved to Colab, thinking it was a simpler environment. Basic functions are working, but none of the "addons" like faceswap, InstantID, or PhotoMaker work. Always the same error.

machineminded commented 5 months ago

Yeah, Colab installation isn't working well. However, the above error shows that it can't find a face, which at least means the inswapper function itself is running. Did it work after using index 0?

machineminded commented 5 months ago

Also, I noticed from your log that the target image face count is zero:

Source faces: 1
Target faces: 0
Replacing specific face(s) in the target image with specific face(s) from the source image
Traceback (most recent call last):
  File "/content/drive/MyDrive/Fooocus-inswapper/modules/async_worker.py", line 885, in worker
    handler(task)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/drive/MyDrive/Fooocus-inswapper/modules/async_worker.py", line 837, in handler
    imgs = perform_face_swap(imgs, inswapper_source_image, inswapper_target_image_index)
  File "/content/drive/MyDrive/Fooocus-inswapper/modules/face_swap.py", line 14, in perform_face_swap
    result_image = process([source_image], item, "-1", f"{int(inswapper_target_image_index)}", "../inswapper/checkpoints/inswapper_128.onnx")
  File "/content/drive/MyDrive/Fooocus-inswapper/inswapper/swapper.py", line 163, in process
    raise Exception("Number of target indexes is greater than the number of faces in the target image")
Exception: Number of target indexes is greater than the number of faces in the target image
Total time: 32.13 seconds

So it can't find a target face to apply the source face.
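For context, the check that raises this exception can be sketched as follows. This is a hypothetical re-creation of the validation in inswapper's swapper.py, not the actual code; the point is that with zero detected target faces, any requested index fails, so changing the index alone can't help:

```python
def validate_target_indexes(target_indexes: str, num_target_faces: int) -> list[int]:
    """Hypothetical sketch of the target-index check in inswapper's process()."""
    if target_indexes == "-1":
        # -1 means: swap onto every face detected in the target image
        return list(range(num_target_faces))
    indexes = [int(i) for i in target_indexes.split(",")]
    if len(indexes) > num_target_faces:
        # With num_target_faces == 0, even a single index like "0" trips this
        raise ValueError("Number of target indexes is greater than the "
                         "number of faces in the target image")
    return indexes
```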

bjorndunkel commented 5 months ago

Not sure what I have wrong in my install.

Inswapper, PhotoMaker and InstantID show up but don't work. Any idea how to fix that?

(base) bjorn@MacBook-Pro-von-bjorn ~ % cd fooocus-inswapper
(base) bjorn@MacBook-Pro-von-bjorn fooocus-inswapper % source venv/bin/activate
(venv) (base) bjorn@MacBook-Pro-von-bjorn fooocus-inswapper % python launch.py --disable-offload-from-vram
[System ARGV] ['launch.py', '--disable-offload-from-vram']
Python 3.11.5 (main, Sep 11 2023, 08:31:25) [Clang 14.0.6 ]
Fooocus version: 2.1.865
Total VRAM 36864 MB, total RAM 36864 MB
Set vram state to: SHARED
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
/Users/bjorn/Fooocus-inswapper/venv/lib/python3.11/site-packages/diffusers/utils/outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
Refiner unloaded.
/Users/bjorn/Fooocus-inswapper/venv/lib/python3.11/site-packages/diffusers/utils/outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: /Users/bjorn/Fooocus-inswapper/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/Users/bjorn/Fooocus-inswapper/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/Users/bjorn/Fooocus-inswapper/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/Users/bjorn/Fooocus-inswapper/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
/Users/bjorn/Fooocus-inswapper/venv/lib/python3.11/site-packages/diffusers/utils/outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
'NoneType' object has no attribute 'local_url'
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().

Inswapper: ENABLED
PhotoMaker: DISABLED
InstantID: DISABLED
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 7489577473606270259
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 60 - 30
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] girl running at the beach, intricate, elegant, highly detailed, sharp focus, beautiful, dynamic light, new classic, ambient, cinematic, directed, futuristic, stunning, shiny, bright, colorful, color, epic, inspired, cute, creative, awesome, attractive, best, pretty, smart, atmosphere, perfect,, quiet, relaxed, warm, lovely, awarded
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] girl running at the beach, full vivid color, highly detailed, cinematic, elegant, intricate, innocent, inspired, stunning, thought, epic, sharp focus, beautiful, open background, amazing, elite, professional, awarded, creative, cool, awesome, illuminated, pretty, attractive, best, breathtaking, shining, perfect, smart, cheerful, pure, determined, radiant
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (896, 1152)
Preparation time: 5.09 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
100%|███████████████████████████████████████████| 60/60 [04:29<00:00, 4.49s/it]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.40 seconds
Inswapper: Target index: 0.0
Applied providers: ['CoreMLExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'AzureExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'AzureExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'AzureExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CoreMLExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'AzureExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'AzureExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (320, 320)
Traceback (most recent call last):
  File "/Users/bjorn/Fooocus-inswapper/modules/async_worker.py", line 885, in worker
    handler(task)
  File "/Users/bjorn/Fooocus-inswapper/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/bjorn/Fooocus-inswapper/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/bjorn/Fooocus-inswapper/modules/async_worker.py", line 837, in handler
    imgs = perform_face_swap(imgs, inswapper_source_image, inswapper_target_image_index)
  File "/Users/bjorn/Fooocus-inswapper/modules/face_swap.py", line 14, in perform_face_swap
    result_image = process([source_image], item, "-1", f"{int(inswapper_target_image_index)}", "../inswapper/checkpoints/inswapper_128.onnx")
  File "/Users/bjorn/Fooocus-inswapper/inswapper/swapper.py", line 79, in process
    face_swapper = getFaceSwapModel(model_path)
  File "/Users/bjorn/Fooocus-inswapper/inswapper/swapper.py", line 19, in getFaceSwapModel
    model = insightface.model_zoo.get_model(model_path)
  File "/Users/bjorn/Fooocus-inswapper/venv/lib/python3.11/site-packages/insightface/model_zoo/model_zoo.py", line 91, in get_model
    assert osp.exists(model_file), 'model_file %s should exist'%model_file
AssertionError: model_file /Users/bjorn/Fooocus-inswapper/inswapper/../inswapper/checkpoints/inswapper_128.onnx should exist
Total time: 282.95 seconds

machineminded commented 5 months ago

@bjorndunkel In this particular case it is missing the inswapper_128.onnx file. Place it into <fooocus-root>/inswapper/checkpoints. You can find the download URL in configure.sh
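A quick pre-flight check for this could look like the following. The helper and its error message are hypothetical; the path is taken from the traceback above, and the actual download URL stays in configure.sh:

```python
from pathlib import Path

def check_inswapper_model(fooocus_root: Path) -> Path:
    """Verify inswapper_128.onnx is in place before launching (hypothetical helper)."""
    model_file = fooocus_root / "inswapper" / "checkpoints" / "inswapper_128.onnx"
    if not model_file.exists():
        raise FileNotFoundError(
            f"{model_file} is missing; download it (URL is listed in configure.sh) "
            f"and place it in {model_file.parent}"
        )
    return model_file
```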

bjorndunkel commented 5 months ago

Thank you @machineminded 🙏 I tried it and got a red error message:

2024-03-13 07:25:04.418149 [E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running CoreML_11777492068329204276_6 node. Name:'CoreMLExecutionProvider_CoreML_11777492068329204276_6_6' Status Message: Exception: /Users/runner/work/1/s/onnxruntime/core/providers/coreml/model/model.mm:71 InlinedVector<int64_t> (anonymous namespace)::GetStaticOutputShape(gsl::span<const int64_t>, gsl::span<const int64_t>, const logging::Logger &) inferred_shape.size() == coreml_static_shape.size() was false. CoreML static output shape ({1,1,1,800,1}) and inferred shape ({3200,1}) have different ranks.

here is the full content:

Inswapper: ENABLED
PhotoMaker: DISABLED
InstantID: DISABLED
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 1320938008342542775
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 60 - 30
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (896, 1152)
Preparation time: 2.62 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 10.70 seconds
  0%| | 0/60 [00:00<?, ?it/s]/Users/bjorn/Fooocus-inswapper/modules/anisotropic.py:132: UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:13.)
  s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)
100%|███████████████████████████████████████████| 60/60 [04:31<00:00, 4.53s/it]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.43 seconds
Inswapper: Target index: 0.0
Applied providers: ['CoreMLExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'AzureExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'AzureExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'AzureExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CoreMLExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'AzureExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'AzureExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: ./checkpoints/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (320, 320)
/Users/bjorn/Fooocus-inswapper/venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'CoreMLExecutionProvider, AzureExecutionProvider, CPUExecutionProvider'
  warnings.warn(
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
inswapper-shape: [1, 3, 128, 128]
2024-03-13 07:25:04.418149 [E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running CoreML_11777492068329204276_6 node. Name:'CoreMLExecutionProvider_CoreML_11777492068329204276_6_6' Status Message: Exception: /Users/runner/work/1/s/onnxruntime/core/providers/coreml/model/model.mm:71 InlinedVector<int64_t> (anonymous namespace)::GetStaticOutputShape(gsl::span<const int64_t>, gsl::span<const int64_t>, const logging::Logger &) inferred_shape.size() == coreml_static_shape.size() was false. CoreML static output shape ({1,1,1,800,1}) and inferred shape ({3200,1}) have different ranks.

Traceback (most recent call last):
  File "/Users/bjorn/Fooocus-inswapper/modules/async_worker.py", line 885, in worker
    handler(task)
  File "/Users/bjorn/Fooocus-inswapper/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/bjorn/Fooocus-inswapper/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/bjorn/Fooocus-inswapper/modules/async_worker.py", line 837, in handler
    imgs = perform_face_swap(imgs, inswapper_source_image, inswapper_target_image_index)
  File "/Users/bjorn/Fooocus-inswapper/modules/face_swap.py", line 14, in perform_face_swap
    result_image = process([source_image], item, "-1", f"{int(inswapper_target_image_index)}", "../inswapper/checkpoints/inswapper_128.onnx")
  File "/Users/bjorn/Fooocus-inswapper/inswapper/swapper.py", line 85, in process
    target_faces = get_many_faces(face_analyser, target_img)
  File "/Users/bjorn/Fooocus-inswapper/inswapper/swapper.py", line 45, in get_many_faces
    face = face_analyser.get(frame)
  File "/Users/bjorn/Fooocus-inswapper/venv/lib/python3.11/site-packages/insightface/app/face_analysis.py", line 59, in get
    bboxes, kpss = self.det_model.detect(img,
  File "/Users/bjorn/Fooocus-inswapper/venv/lib/python3.11/site-packages/insightface/model_zoo/retinaface.py", line 224, in detect
    scores_list, bboxes_list, kpss_list = self.forward(det_img, self.det_thresh)
  File "/Users/bjorn/Fooocus-inswapper/venv/lib/python3.11/site-packages/insightface/model_zoo/retinaface.py", line 152, in forward
    net_outs = self.session.run(self.output_names, {self.input_name : blob})
  File "/Users/bjorn/Fooocus-inswapper/venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 220, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running CoreML_11777492068329204276_6 node. Name:'CoreMLExecutionProvider_CoreML_11777492068329204276_6_6' Status Message: Exception: /Users/runner/work/1/s/onnxruntime/core/providers/coreml/model/model.mm:71 InlinedVector<int64_t> (anonymous namespace)::GetStaticOutputShape(gsl::span<const int64_t>, gsl::span<const int64_t>, const logging::Logger &) inferred_shape.size() == coreml_static_shape.size() was false. CoreML static output shape ({1,1,1,800,1}) and inferred shape ({3200,1}) have different ranks.

Total time: 294.46 seconds

machineminded commented 5 months ago

This error is a bit out of my league. I need to get my hands on a Mac that I can test this on. I might be able to test it in Docker in Windows... I will mess with this later tonight hopefully.
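One workaround that might be worth trying in the meantime (untested on a Mac, so treat it as an assumption): the shape-rank error comes from the CoreML execution provider, and dropping it so onnxruntime falls back to plain CPU execution often sidesteps this class of error, at some speed cost. A small helper (hypothetical name) could filter the provider list before it is passed to insightface's FaceAnalysis:

```python
def cpu_only_providers(available: list[str]) -> list[str]:
    """Drop CoreML from an onnxruntime provider list (hypothetical helper).

    The filtered list would be passed as the `providers` argument when
    constructing insightface.app.FaceAnalysis, so face detection runs on
    the remaining (CPU) providers instead of CoreML.
    """
    return [p for p in available if p != "CoreMLExecutionProvider"]

# With the provider list from the log above:
providers = ["CoreMLExecutionProvider", "AzureExecutionProvider", "CPUExecutionProvider"]
# cpu_only_providers(providers) -> ["AzureExecutionProvider", "CPUExecutionProvider"]
```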

bjorndunkel commented 5 months ago

That would be amazing. I'm doing something wrong for sure.

Thanks for your help @machineminded

bjorndunkel commented 5 months ago

This error is a bit out of my league. I need to get my hands on a Mac that I can test this on. I might be able to test it in Docker in Windows... I will mess with this later tonight hopefully.

Did you have luck and time to look into this? I can also start from scratch with your instructions, step by step, and copy the output.