Bing-su / adetailer

Auto detecting, masking and inpainting with detection model.
GNU Affero General Public License v3.0

[Bug]: Inpaint Denoising strength not working correctly in ForgeUI #707

Closed EmotionIce closed 1 month ago

EmotionIce commented 2 months ago

Describe the bug

I'm using the latest version of ForgeUI in Flux mode with a Flux model, while the ADetailer extension runs with an SDXL model. The issue is that denoising strengths below 0.8 have little to no effect on the result. In my testing, strengths up to 0.973 have some effect, but still a rather weak one. At 0.974 and above, the denoising seems to start from a fully light-blue background and the result is unusable.

My setup: 1070 Ti (8 GB), Windows 11, latest Forge and latest ADetailer version. ADetailer inpaint settings: (screenshot attached)

Result images with different denoising strengths:

Original, as generated by Flux (screenshot)

ADetailer with 0.7 strength -> almost no difference to the original (screenshot)

ADetailer with 0.973 strength -> at least some difference (screenshot)

ADetailer with 0.974 strength -> blue background from the ADetailer step (screenshot)

Steps to reproduce

Screenshots

No response

Console logs, from start to end.

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-504-gc030fed3
Commit hash: c030fed30b8e54c97d04e13309f6872b22f92b8d
Launching Web UI with arguments:
Total VRAM 8192 MB, total RAM 32682 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1070 Ti : native
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Using Stream: False
C:\Users\emoice\Desktop\forge\system\python\lib\site-packages\transformers\utils\hub.py:127: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: C:\Users\emoice\Desktop\forge\webui\models\ControlNetPreprocessor
[-] ADetailer initialized. version: 24.9.0, num models: 10
2024-09-05 19:19:12,038 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'C:\\Users\\emoice\\Desktop\\forge\\webui\\models\\Stable-diffusion\\nsfwMASTERFLUXLoraMerged_v10.safetensors', 'hash': '1091bf14'}, 'additional_modules': ['C:\\Users\\emoice\\Desktop\\forge\\webui\\models\\VAE\\ae.safetensors', 'C:\\Users\\emoice\\Desktop\\forge\\webui\\models\\text_encoder\\clip_l.safetensors', 'C:\\Users\\emoice\\Desktop\\forge\\webui\\models\\text_encoder\\t5xxl_fp16.safetensors'], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 35.7s (prepare environment: 6.5s, import torch: 15.8s, initialize shared: 0.3s, other imports: 1.2s, load scripts: 5.0s, create ui: 4.3s, gradio launch: 2.4s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 87.50% GPU memory (7167.00 MB) to load weights, and use 12.50% GPU memory (1024.00 MB) to do matrix computation.
Loading Model: {'checkpoint_info': {'filename': 'C:\\Users\\emoice\\Desktop\\forge\\webui\\models\\Stable-diffusion\\nsfwMASTERFLUXLoraMerged_v10.safetensors', 'hash': '1091bf14'}, 'additional_modules': ['C:\\Users\\emoice\\Desktop\\forge\\webui\\models\\VAE\\ae.safetensors', 'C:\\Users\\emoice\\Desktop\\forge\\webui\\models\\text_encoder\\clip_l.safetensors', 'C:\\Users\\emoice\\Desktop\\forge\\webui\\models\\text_encoder\\t5xxl_fp16.safetensors'], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
StateDict Keys: {'transformer': 780, 'vae': 244, 'text_encoder': 196, 'text_encoder_2': 220, 'ignore': 0}
Using Default T5 Data Type: torch.float16
Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
K-Model Created: {'storage_dtype': torch.bfloat16, 'computation_dtype': torch.float32}
Model loaded in 47.3s (unload existing model: 0.2s, forge model load: 47.0s).
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
All loaded to GPU.
Moving model(s) has taken 0.01 seconds
Distilled CFG Scale: 3.5
[Unload] Trying to free 30534.18 MB for cuda:0 with 0 models keep loaded ... Current free memory is 7167.94 MB ... Done.
[Memory Management] Target: KModel, Free GPU: 7167.94 MB, Model Require: 22700.13 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: -16556.20 MB, CPU Swap Loaded (blocked method): 17838.00 MB, GPU Loaded: 4862.13 MB
Moving model(s) has taken 18.29 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 28/28 [03:24<00:00,  7.32s/it]
[Unload] Trying to free 1439.67 MB for cuda:0 with 0 models keep loaded ... Current free memory is 2270.51 MB ... Done.
[Memory Management] Target: IntegratedAutoencoderKL, Free GPU: 2270.51 MB, Model Require: 319.75 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 926.77 MB, All loaded to GPU.
Moving model(s) has taken 0.24 seconds

0: 640x640 1 face, 6.5ms
Speed: 10.1ms preprocess, 6.5ms inference, 163.2ms postprocess per image at shape (1, 3, 640, 640)
[Unload] Trying to free 1024.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 1925.50 MB ... Done.
Distilled CFG Scale: 3.5
Distilled CFG Scale: 3.5
[Unload] Trying to free 1024.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 1927.65 MB ... Done.
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [05:48<00:00,  9.41s/it]
[Unload] Trying to free 1024.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 1927.52 MB ... Done.
Total progress: 100%|██████████████████████████████████████████████████████████████████| 28/28 [09:28<00:00, 20.31s/it]

List of installed extensions

Only have ADetailer installed

dosubot[bot] commented 2 months ago

Hey @EmotionIce! I'm here to assist you with any bugs, questions, or contributions. Let's squash this bug together! 🤖

There are known issues with the denoising strength parameter in the ADetailer extension, particularly when used with ForgeUI in Flux mode. The dynamic denoise strength adjustment logic can cause the denoise strength to fall below the expected threshold, leading to incorrect behavior. This is due to the get_dynamic_denoise_strength function, which modifies the denoise strength based on the bounding box size and a power scaling factor (ad_dynamic_denoise_power). If this power scaling factor is set to a value that significantly alters the denoise strength, it can cause the denoise strength to fall below 0.8, resulting in minimal effect on the output [1].

Here is the relevant code snippet:

```python
@staticmethod
def get_dynamic_denoise_strength(
    denoise_strength: float, bbox: Sequence[Any], image_size: tuple[int, int]
):
    denoise_power = opts.data.get("ad_dynamic_denoise_power", 0)
    if denoise_power == 0:
        return denoise_strength

    modified_strength = dynamic_denoise_strength(
        denoise_power=denoise_power,
        denoise_strength=denoise_strength,
        bbox=bbox,
        image_size=image_size,
    )

    print(
        f"[-] ADetailer: dynamic denoising -- {denoise_strength:.2f} -> {modified_strength:.2f}"
    )

    return modified_strength
```
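The `dynamic_denoise_strength` helper that the snippet calls is not shown above. As a rough illustration of how a bbox-size power scaling like this can push the effective strength well below what the user set, here is a minimal sketch — a hypothetical reconstruction, not ADetailer's actual implementation. It assumes the helper multiplies the user's strength by `(1 - bbox_area / image_area) ** denoise_power`:

```python
def dynamic_denoise_strength(
    denoise_power: float,
    denoise_strength: float,
    bbox,        # (x1, y1, x2, y2) in pixels -- assumed layout
    image_size,  # (width, height) in pixels
) -> float:
    # Fraction of the whole image covered by the detection box.
    width, height = image_size
    image_pixels = width * height
    bbox_pixels = (bbox[2] - bbox[0]) * (bbox[3] - bbox[1])
    normalized_area = bbox_pixels / image_pixels

    # Assumed scaling: the larger the box and the higher the power,
    # the more the strength is damped below the user's setting.
    modifier = (1.0 - normalized_area) ** denoise_power
    return denoise_strength * modifier
```

Under this assumption, a face box covering 25% of the frame with `ad_dynamic_denoise_power = 2` would turn a requested strength of 0.8 into 0.8 × 0.75² = 0.45 — which is why checking the console for the "dynamic denoising" log line printed by `get_dynamic_denoise_strength` is a quick way to see whether this path is active at all.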

To address this issue, you can try the following steps:

  1. Adjust the ad_dynamic_denoise_power Setting: Ensure that the ad_dynamic_denoise_power is set to a value within the recommended range (2-4). This can help in achieving a more effective denoise strength.

  2. Check Dependencies and Compatibility: Verify that all dependencies for ADetailer are correctly installed and that the version of ADetailer you're using is compatible with your version of ForgeUI and Flux mode.

  3. Review Configuration Settings: Double-check the specific configurations for the inpaint feature within ADetailer. Settings such as "Inpaint mask blur", "Inpaint denoising strength", and "Inpaint only masked" are crucial for the correct operation of the inpainting process [2].

If these steps do not resolve the issue, you might consider reverting to an older version of ADetailer, as some users have found success with this approach [3].

To continue talking to Dosu, mention @dosu.

derpina-ai commented 1 week ago

How did you solve this one? It's extremely annoying, and without hacking my adetailer code I don't see a way to fix it.