LykosAI / StabilityMatrix

Multi-Platform Package Manager for Stable Diffusion
https://lykos.ai
GNU Affero General Public License v3.0

Face Detailer is not working as intended. #918

Open · KloneBaker opened this issue 1 month ago

KloneBaker commented 1 month ago

What happened?

When Face Detailer is enabled, it works on the first pass if I'm lucky. But if I then change any of the Face Detailer values, it still generates the exact same image.

Here are the solutions I've tried, with no luck (basically it doesn't take the changes into account and just outputs the last result):

  1. Turning the module on and off.
  2. Removing and re-adding the module and tweaking it from the start.
  3. Doing a fresh installation of Stability Matrix.

And to make matters worse, sometimes it doesn't even work on the first run.

P.S. I've managed to build a ComfyUI workflow that uses SDXL/Pony models with the same Face Detailer settings, and it outputs beautifully with no hiccups or crashes.
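
To rule out the Inference UI, here is a minimal sketch of how a changed Face Detailer value can be tested directly against the ComfyUI backend. It assumes a workflow exported from ComfyUI in API format as `workflow_api.json` and the default local server address; the node id `"12"` and the `denoise` input name are placeholders for whatever your exported graph actually contains, not values taken from Stability Matrix.

```python
# Minimal sketch: resubmit an exported ComfyUI workflow with one tweaked input
# and see whether the output image changes between runs.
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188/prompt"  # default ComfyUI API endpoint

with open("workflow_api.json", "r", encoding="utf-8") as f:
    graph = json.load(f)

# Change one Face Detailer input between runs; node id and input name are
# placeholders and depend on your exported graph.
graph["12"]["inputs"]["denoise"] = 0.35

payload = json.dumps({"prompt": graph}).encode("utf-8")
req = urllib.request.Request(
    COMFYUI_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # prints the queued prompt id
```

If the output image changes whenever this value changes, the backend is honouring the setting, which suggests the problem is in how the Inference UI builds or caches the prompt it sends.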

System information and specs:
OS: Windows 11 23H2 (22631.3085)
GPU: RTX 3070 8GB
GPU driver version: 560.70 Game Ready Driver

Steps to reproduce

  1. Open Stability Matrix.
  2. Set everything up as you normally would for an SDXL or Pony model.
  3. Add the Face Detailer node/module and start tweaking it.
  4. Hit the Generate button.
  5A. If you're lucky it produces output, but subsequent changes to Face Detailer are not taken into account and the output stays the same.
  5B. If you're unlucky, it bypasses the Face Detailer node/module entirely.

Relevant logs

D:\Stability Matrix\Data\Packages\ComfyUI\venv\lib\site-packages\ultralytics\nn\tasks.py:833: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  ckpt = torch.load(file, map_location="cpu")
D:\Stability Matrix\Data\Packages\ComfyUI\venv\lib\site-packages\segment_anything\build_sam.py:105: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  state_dict = torch.load(f)
Loads SAM model: D:\Stability Matrix\Data\Models\Sams\sam_vit_b_01ec64.pth (device:AUTO)
model weight dtype torch.float16, manual cast: None
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
D:\Stability Matrix\Data\Packages\ComfyUI\venv\lib\site-packages\transformers\tokenization_utils_base.py:1617: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be deprecated in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Requested to load SDXLClipModel
Loading 1 new model
loaded completely 0.0 1560.802734375 True
loaded straight to GPU
Requested to load SDXL
Loading 1 new model
loaded completely 0.0 4897.0483474731445 True
Requested to load SDXLClipModel
Loading 1 new model
loaded completely 0.0 1560.802734375 True
D:\Stability Matrix\Data\Packages\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
  out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
Unloading models for lowram load.
0 models unloaded.
loaded partially 4890.335068511963 4890.333869934082 0
100%|██████████| 20/20 [00:07<00:00,  2.67it/s]
Requested to load AutoencoderKL
Loading 1 new model
loaded completely 0.0 159.55708122253418 True

0: 640x480 1 face, 84.0ms
Speed: 15.2ms preprocess, 84.0ms inference, 98.1ms postprocess per image at shape (1, 3, 640, 480)
CLIP: [None]
Requested to load SDXLClipModel
Loading 1 new model
loaded completely 0.0 1560.802734375 True
Detailer: segment skip (enough big)
Prompt executed in 36.39 seconds
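
As a side note, the `torch.load` FutureWarnings in the log above recommend opting into `weights_only=True`. Here is a minimal sketch of that pattern, with `"model.pth"` as a placeholder path rather than a file shipped by Stability Matrix or the custom nodes:

```python
import torch

# weights_only=True restricts unpickling to tensors and plain containers,
# which is the behaviour the FutureWarning says will become the default.
state_dict = torch.load("model.pth", map_location="cpu", weights_only=True)

# If a trusted checkpoint genuinely needs extra classes during unpickling,
# newer PyTorch versions let you allowlist them explicitly, e.g.:
# torch.serialization.add_safe_globals([SomeTrustedClass])
```

These warnings are likely unrelated to the stuck output, but silencing them makes the relevant part of the log easier to read.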

Version

2.12.0

What Operating System are you using?

Windows

github-actions[bot] commented 20 hours ago

This issue is stale because it has been open 30 days with no activity. Remove the stale label or comment, else this will be closed in 5 days.