huchenlei / sd-webui-controlnet-evaclip

EVA-CLIP preprocessor for sd-webui-controlnet
MIT License

about pip install apex #2

Open · hben35096 opened this issue 1 month ago

hben35096 commented 1 month ago

I followed this hint from the startup log:

Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --port=6006 --xformers --theme=dark --enable-insecure-extension-access --ad-no-huggingface
[-] ADetailer initialized. version: 24.5.1, num models: 14
ControlNet preprocessor location: /root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads
2024-05-23 14:18:18,927 - ControlNet - INFO - ControlNet v1.1.449
Please 'pip install apex'

After running pip install apex, I got the result shown in the attached screenshot.

Is it possible that the "apex" I installed is not the "apex" the message is asking for?
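
One way to check which "apex" actually got installed (a minimal sketch, assuming it is run with the same Python environment that launches the webui; note that the PyPI package named apex is a different project from NVIDIA/apex):

    # Minimal check: where does the installed "apex" come from?
    import importlib.util, importlib.metadata

    spec = importlib.util.find_spec("apex")
    print(spec.origin if spec else "apex is not importable")       # filesystem location of the package
    print(importlib.metadata.metadata("apex").get("Home-page"))    # NVIDIA apex reports https://github.com/NVIDIA/apex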

hben35096 commented 1 month ago

It seems that what actually needs to be installed is NVIDIA apex, built from source:

!git clone https://github.com/NVIDIA/apex.git
%cd /root/apex
!pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" ./

The apex README also lists an alternative for pip >= 23.1, which supports multiple --config-settings with the same key (ref: https://pip.pypa.io/en/stable/news/#v23-1), but that is not the one I needed:

!pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./
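
To decide which of the two install commands applies, a quick version check can help (a minimal sketch; it assumes the usual "pip X.Y.Z from ..." output format of pip --version):

    # Minimal check: is pip new enough (>= 23.1) for the --config-settings form?
    import subprocess, sys

    out = subprocess.check_output([sys.executable, "-m", "pip", "--version"], text=True)
    major, minor = (int(x) for x in out.split()[1].split(".")[:2])
    if (major, minor) >= (23, 1):
        print("pip supports repeated --config-settings; either command works")
    else:
        print("use the --global-option form shown above")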

But the error still occurs:

Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --port=6006 --xformers --theme=dark --enable-insecure-extension-access --ad-no-huggingface
[-] ADetailer initialized. version: 24.5.1, num models: 14
ControlNet preprocessor location: /root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads
2024-05-23 15:09:58,377 - ControlNet - INFO - ControlNet v1.1.449
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [d4656d332f] from /root/autodl-tmp/stable-diffusion-webui/models/Stable-diffusion/sd_xl/dreamshaperXL_lightningDPMSDE.safetensors
2024-05-23 15:09:59,333 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://127.0.0.1:6006/
Creating model from config: /root/autodl-tmp/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml

To create a public link, set `share=True` in `launch()`.
IIB Database file has been successfully backed up to the backup folder.
Startup time: 17.7s (prepare environment: 3.1s, import torch: 4.4s, import gradio: 1.7s, setup paths: 2.1s, initialize shared: 0.2s, other imports: 0.5s, list SD models: 1.2s, load scripts: 2.3s, create ui: 0.9s, gradio launch: 1.0s, app_started_callback: 0.3s).
Applying attention optimization: xformers... done.
Model loaded in 5.3s (load weights from disk: 1.1s, create model: 1.1s, apply weights to model: 2.2s, load textual inversion embeddings: 0.3s, calculate empty prompt: 0.3s).
2024-05-23 15:10:04,440 - ControlNet - INFO - unit_separate = False, style_align = False
2024-05-23 15:10:04,773 - ControlNet - INFO - Loading model: ip-adapter_pulid_sdxl_fp16 [d86d05ea]
2024-05-23 15:10:04,792 - ControlNet - INFO - Loaded state_dict from [/root/autodl-tmp/stable-diffusion-webui/models/ControlNet/ip-adapter_pulid_sdxl_fp16.safetensors]
2024-05-23 15:10:08,610 - ControlNet - INFO - ControlNet model ip-adapter_pulid_sdxl_fp16 [d86d05ea](ControlModelType.IPAdapter) loaded.
2024-05-23 15:10:08,618 - ControlNet - INFO - Using preprocessor: ip-adapter-auto
2024-05-23 15:10:08,618 - ControlNet - INFO - preprocessor resolution = 512
2024-05-23 15:10:08,619 - ControlNet - INFO - ip-adapter-auto => ip-adapter_pulid
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads/insightface/models/antelopev2/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads/insightface/models/antelopev2/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads/insightface/models/antelopev2/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads/insightface/models/antelopev2/glintr100.onnx recognition ['None', 3, 112, 112] 127.5 127.5
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads/insightface/models/antelopev2/scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0
set det-size: (640, 640)
/root/miniconda3/lib/python3.10/site-packages/insightface/utils/transform.py:68: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
  P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4
*** Error running process: /root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py
    Traceback (most recent call last):
      File "/root/autodl-tmp/stable-diffusion-webui/modules/scripts.py", line 825, in process
        script.process(p, *script_args)
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 1222, in process
        self.controlnet_hack(p)
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 1207, in controlnet_hack
        self.controlnet_main_entry(p)
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 941, in controlnet_main_entry
        controls, hr_controls, additional_maps = get_control(
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 290, in get_control
        controls, hr_controls = list(zip(*[preprocess_input_image(img) for img in optional_tqdm(input_images)]))
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 290, in <listcomp>
        controls, hr_controls = list(zip(*[preprocess_input_image(img) for img in optional_tqdm(input_images)]))
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 242, in preprocess_input_image
        result = preprocessor.cached_call(
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/supported_preprocessor.py", line 196, in cached_call
        result = self._cached_call(input_image, *args, **kwargs)
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/utils.py", line 82, in decorated_func
        return cached_func(*args, **kwargs)
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/utils.py", line 66, in cached_func
        return func(*args, **kwargs)
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/supported_preprocessor.py", line 209, in _cached_call
        return self(*args, **kwargs)
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/preprocessor/ip_adapter_auto.py", line 25, in __call__
        return p(*args, **kwargs)
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/preprocessor/pulid.py", line 157, in __call__
        r = evaclip_preprocessor(face_features_image)
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet-evaclip/scripts/preprocessor_evaclip.py", line 67, in __call__
        self.load_model()
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet-evaclip/scripts/preprocessor_evaclip.py", line 37, in load_model
        self.model, _, _ = create_model_and_transforms(
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet-evaclip/eva_clip/factory.py", line 377, in create_model_and_transforms
        model = create_model(
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet-evaclip/eva_clip/factory.py", line 270, in create_model
        model = CustomCLIP(**model_cfg, cast_dtype=cast_dtype)
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet-evaclip/eva_clip/model.py", line 281, in __init__
        self.visual = _build_vision_tower(embed_dim, vision_cfg, quick_gelu, cast_dtype)
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet-evaclip/eva_clip/model.py", line 110, in _build_vision_tower
        visual = EVAVisionTransformer(
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet-evaclip/eva_clip/eva_vit_model.py", line 417, in __init__
        self.blocks = nn.ModuleList([
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet-evaclip/eva_clip/eva_vit_model.py", line 418, in <listcomp>
        Block(
      File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet-evaclip/eva_clip/eva_vit_model.py", line 253, in __init__
        self.norm1 = norm_layer(dim)
      File "/root/miniconda3/lib/python3.10/site-packages/apex/normalization/fused_layer_norm.py", line 294, in __init__
        fused_layer_norm_cuda = importlib.import_module("fused_layer_norm_cuda")
      File "/root/miniconda3/lib/python3.10/importlib/__init__.py", line 126, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
      File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
      File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
    ModuleNotFoundError: No module named 'fused_layer_norm_cuda'
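
For context on the traceback: importing apex itself succeeds, but FusedLayerNorm.__init__ then imports the compiled fused_layer_norm_cuda extension, which only exists when apex is built from source with the --cpp_ext/--cuda_ext options. A quick way to verify whether those kernels are present (a minimal sketch, run in the webui's Python environment):

    # Minimal check: was apex built with its CUDA extensions?
    import importlib

    for module in ("apex", "fused_layer_norm_cuda"):
        try:
            importlib.import_module(module)
            print(module, "OK")
        except ModuleNotFoundError:
            print(module, "missing - rebuild apex with --cpp_ext/--cuda_ext")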