Bing-su / adetailer

Auto detecting, masking and inpainting with detection model.
GNU Affero General Public License v3.0

[Bug]: ADetailer crashes because of a NoneType error #653

Open DiegoRRR opened 3 days ago

DiegoRRR commented 3 days ago

Describe the bug

I just installed ADetailer and let it download its file. ADetailer crashes because of a NoneType error.

Steps to reproduce

Screenshots

No response

Console logs, from start to end.

Python 3.10.14 (main, Mar 20 2024, 16:10:51) [MSC v.1939 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments: --xformers --medvram-sdxl
Arg --medvram-sdxl is removed in Forge.
Now memory management is fully automatic and you do not need any command flags.
Please just remove this flag.
In extreme cases, if you want to force previous lowvram/medvram behaviors, please use --always-offload-from-vram
Total VRAM 12288 MB, total RAM 32677 MB
WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : native
Hint: your device supports --pin-shared-memory for potential speed improvements.

Hint: your device supports --cuda-malloc for potential speed improvements.
Hint: your device supports --cuda-stream for potential speed improvements.
VAE dtype: torch.bfloat16
CUDA Stream Activated:  False
D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: '[WinError 127] La procédure spécifiée est introuvable'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
Using xformers cross attention
==============================================================================
You are running torch 2.0.1+cu118.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
=================================================================================
You are running xformers 0.0.20.
The program is tested to work with xformers 0.0.23.post1.
To reinstall the desired version, run with commandline flag --reinstall-xformers.

Use --skip-version-check commandline argument to disable this check.
=================================================================================
ControlNet preprocessor location: D:\apps\stable-diffusion\Forge\webui\models\ControlNetPreprocessor
[-] ADetailer initialized. version: 24.6.0, num models: 10
Loading weights [d4aedfb248] from D:\apps\stable-diffusion\Forge\webui\models\Stable-diffusion\cyberrealisticXL_v21.safetensors
2024-07-02 16:06:51,529 - ControlNet - INFO - ControlNet UI callback registered.

model_type EPS
UNet ADM Dimension 2816
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 30.7s (prepare environment: 4.1s, import torch: 8.3s, import gradio: 1.7s, setup paths: 1.5s, initialize shared: 0.2s, other imports: 0.9s, list SD models: 0.3s, load scripts: 10.5s, create ui: 2.8s, gradio launch: 0.3s).
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  11541.73046875
[Memory Management] Model Memory (MB) =  2144.3546981811523
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  8373.375770568848
Moving model(s) has taken 0.64 seconds
Model loaded in 13.8s (load weights from disk: 0.3s, forge instantiate config: 1.7s, forge load real models: 8.4s, calculate empty prompt: 3.4s).
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  8979.02490234375
[Memory Management] Model Memory (MB) =  4897.086494445801
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  3057.938407897949
Moving model(s) has taken 1.75 seconds
100%|██████████████████████████████████████████| 20/20 [00:06<00:00,  2.91it/s]
To load target model AutoencoderKL█████████████| 20/20 [00:04<00:00,  4.08it/s]
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  3649.71142578125
[Memory Management] Model Memory (MB) =  159.55708122253418
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  2466.154344558716
Moving model(s) has taken 0.05 seconds
Cleanup minimal inference memory.
Traceback (most recent call last):
  File "D:\apps\stable-diffusion\Forge\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "D:\apps\stable-diffusion\Forge\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\apps\stable-diffusion\Forge\webui\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "D:\apps\stable-diffusion\Forge\webui\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "D:\apps\stable-diffusion\Forge\webui\modules\processing.py", line 976, in process_images_inner
    x_sample = modules.face_restoration.restore_faces(x_sample)
  File "D:\apps\stable-diffusion\Forge\webui\modules\face_restoration.py", line 19, in restore_faces
    return face_restorer.restore(np_image)
  File "D:\apps\stable-diffusion\Forge\webui\modules\codeformer_model.py", line 55, in restore
    return self.restore_with_helper(np_image, restore_face)
  File "D:\apps\stable-diffusion\Forge\webui\modules\face_restoration_utils.py", line 159, in restore_with_helper
    return restore_with_face_helper(np_image, self.face_helper, restore_face)
  File "D:\apps\stable-diffusion\Forge\webui\modules\face_restoration_utils.py", line 77, in restore_with_face_helper
    face_helper.get_face_landmarks_5(only_center_face=False, resize=640, eye_dist_threshold=5)
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\facexlib\utils\face_restoration_helper.py", line 139, in get_face_landmarks_5
    bboxes = self.face_det.detect_faces(input_img, 0.97) * scale
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\facexlib\detection\retinaface.py", line 227, in detect_faces
    keep = py_cpu_nms(bounding_boxes, nms_threshold)
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\facexlib\detection\retinaface_utils.py", line 41, in py_cpu_nms
    keep = torchvision.ops.nms(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\torchvision\ops\boxes.py", line 40, in nms
    _assert_has_ops()
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\torchvision\extension.py", line 46, in _assert_has_ops
    raise RuntimeError(
RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.
Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.
*** Error completing request
*** Arguments: ('task(wbzo9vueoxsjm2m)', <gradio.routes.Request object at 0x0000
0001DB63E5C0>, 'girl', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False,
0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', []
, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, True, True, False, {'ad_mode
l': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt
': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, '
ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset':
0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_
denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_maske
d_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, '
ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scal
e': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use
same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler
': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler',
 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip':
 False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'Non
e', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_gu
idance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': '
None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negat
ive_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_rati
o': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_er
ode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_streng
th': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32,
'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_heigh
t': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_c
fg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint'
, 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sa
mpler': 'DPM++ 2M Karras', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_m
ultiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip
_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controln
et_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0
, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, '(SDXL) Only Generate T
ransparent Image (Attention Injection)', 1, 1, None, None, None, 'Crop and Resiz
e', False, '', '', '', ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, u
se_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_ga
llery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_optio
n='Both', enabled=False, module='None', model='None', weight=1, image=None, resi
ze_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, gui
dance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', sav
e_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, us
e_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gal
lery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option
='Both', enabled=False, module='None', model='None', weight=1, image=None, resiz
e_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guid
ance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save
_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use
_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gall
ery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option=
'Both', enabled=False, module='None', model='None', weight=1, image=None, resize
_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guida
nce_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_
detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEA
N', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, Fal
se, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0
, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard
_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False,
False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1
, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False,
0, False) {}
    Traceback (most recent call last):
      File "D:\apps\stable-diffusion\Forge\webui\modules\call_queue.py", line 57
, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

---

List of installed extensions

LDSR built-in None
Lora built-in None
ScuNET built-in None
SwinIR built-in None
canvas-zoom-and-pan built-in None
extra-options-section built-in None
forge_legacy_preprocessors built-in None
forge_preprocessor_inpaint built-in None
forge_preprocessor_marigold built-in None
forge_preprocessor_normalbae built-in None
forge_preprocessor_recolor built-in None
forge_preprocessor_reference built-in None
forge_preprocessor_revision built-in None
forge_preprocessor_tile built-in None
mobile built-in None
prompt-bracket-checker built-in None
sd_forge_controlllite built-in None
sd_forge_controlnet built-in None
sd_forge_controlnet_example built-in None
sd_forge_dynamic_thresholding built-in None
sd_forge_fooocus_inpaint built-in None
sd_forge_freeu built-in None
sd_forge_hypertile built-in None
sd_forge_ipadapter built-in None
sd_forge_kohya_hrfix built-in None
sd_forge_latent_modifier built-in None
sd_forge_multidiffusion built-in None
sd_forge_neveroom built-in None
sd_forge_photomaker built-in None
sd_forge_sag built-in None
sd_forge_stylealign built-in None
sd_forge_svd built-in None
sd_forge_z123 built-in None
soft-inpainting built-in None
adetailer https://github.com/Bing-su/adetailer main a7d96113 2024-06-16 02:32:49 unknown
sd-forge-layerdiffuse https://github.com/layerdiffusion/sd-forge-layerdiffuse main e4d5060e 2024-03-08 15:36:24 unknown

dosubot[bot] commented 3 days ago

@DiegoRRR Hello! I'm Dosu, a bot that can help you fix bugs, answer questions, and become a contributor. I'm here to help you while you wait for a human maintainer.

The NoneType error in ADetailer is likely caused by the mask parameter being None in the StableDiffusionProcessingImg2Img initialization within the get_i2i_p method. This can happen if the mask is not properly set or generated.

To fix this issue, ensure that the mask is correctly created and passed to the StableDiffusionProcessingImg2Img object. You can modify the get_i2i_p method to include a valid mask. Here is a possible solution:

  1. Ensure the mask is created using the create_mask_from_bbox method.
  2. Pass the created mask to the StableDiffusionProcessingImg2Img initialization.

Here is the modified get_i2i_p method:

def get_i2i_p(self, p, args: ADetailerArgs, image):
    seed, subseed = self.get_seed(p)
    width, height = self.get_width_height(p, args)
    steps = self.get_steps(p, args)
    cfg_scale = self.get_cfg_scale(p, args)
    initial_noise_multiplier = self.get_initial_noise_multiplier(p, args)
    sampler_name = self.get_sampler(p, args)
    override_settings = self.get_override_settings(p, args)

    version_args = {}
    if schedulers:
        version_args.update(self.get_scheduler(p, args))

    # Create the mask
    bboxes = self.get_bounding_boxes(p)  # Assuming this method exists to get bounding boxes
    mask = create_mask_from_bbox(bboxes, (width, height))

    i2i = StableDiffusionProcessingImg2Img(
        init_images=[image],
        resize_mode=0,
        denoising_strength=args.ad_denoising_strength,
        mask=mask,  # Pass the created mask here
        mask_blur=args.ad_mask_blur,
        inpainting_fill=1,
        inpaint_full_res=args.ad_inpaint_only_masked,
        inpaint_full_res_padding=args.ad_inpaint_only_masked_padding,
        inpainting_mask_invert=0,
        initial_noise_multiplier=initial_noise_multiplier,
        sd_model=p.sd_model,
        outpath_samples=p.outpath_samples,
        outpath_grids=p.outpath_grids,
        prompt="",  # replace later
        negative_prompt="",
        styles=p.styles,
        seed=seed,
        subseed=subseed,
        subseed_strength=p.subseed_strength,
        seed_resize_from_h=p.seed_resize_from_h,
        seed_resize_from_w=p.seed_resize_from_w,
        sampler_name=sampler_name,
        batch_size=1,
        n_iter=1,
        steps=steps,
        cfg_scale=cfg_scale,
        width=width,
        height=height,
        restore_faces=args.ad_restore_face,
        tiling=p.tiling,
        extra_generation_params=copy_extra_params(p.extra_generation_params),
        do_not_save_samples=True,
        do_not_save_grid=True,
        override_settings=override_settings,
        **version_args,
    )

    i2i.cached_c = [None, None]
    i2i.cached_uc = [None, None]
    i2i.scripts, i2i.script_args = self.script_filter(p, args)
    i2i._ad_disabled = True
    i2i._ad_inner = True

    if args.ad_controlnet_model != "Passthrough" and controlnet_type != "forge":
        self.disable_controlnet_units(i2i.script_args)

    if args.ad_controlnet_model not in ["None", "Passthrough"]:
        self.update_controlnet_args(i2i, args)
    elif args.ad_controlnet_model == "None":
        i2i.control_net_enabled = False

    return i2i

This modification ensures that the mask parameter is not None, which should prevent the NoneType error. The create_mask_from_bbox function from the adetailer/common.py file is used to generate the mask from bounding boxes [1][2].
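
For reference, here is a minimal sketch of what a mask-from-bounding-box helper along these lines could look like. It is an illustrative assumption using PIL, not the actual implementation in adetailer/common.py, and the name create_mask_from_bbox_sketch is made up for the example:

from PIL import Image, ImageDraw

def create_mask_from_bbox_sketch(bboxes, shape):
    """Illustrative only: build one white-on-black mask per bounding box.

    bboxes: list of (x1, y1, x2, y2) pixel coordinates.
    shape:  (width, height) of the image being inpainted.
    """
    masks = []
    for bbox in bboxes:
        mask = Image.new("L", shape, 0)   # black background
        draw = ImageDraw.Draw(mask)
        draw.rectangle(bbox, fill=255)    # white rectangle marks the inpaint region
        masks.append(mask)
    return masks

Note that if the real helper returns a list of masks (one per detection), the get_i2i_p snippet above would need to pick or merge one mask before passing it as mask=.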

To continue talking to Dosu, mention @dosu.

DiegoRRR commented 3 days ago

@dosu : Thanks. Ok, I edited the file "extensions\adetailer\scripts\!adetailer.py" and replaced the definition of the get_i2i_p method. Now I have this error:

Traceback (most recent call last):
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\blocks.py", line 1429, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\blocks.py", line 1239, in preprocess_data
    processed_input.append(block.preprocess(inputs[i]))
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\components\image.py", line 273, in preprocess
    assert isinstance(x, str)
AssertionError
Traceback (most recent call last):
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\apps\stable-diffusion\Forge\webui\extensions-builtin\sd_forge_controlnet\lib_controlnet\controlnet_ui\preset.py", line 257, in update_reset_button
    infotext = ControlNetPresetUI.presets[preset_name]
KeyError: None
Traceback (most recent call last):
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\apps\stable-diffusion\Forge\webui\extensions-builtin\sd_forge_controlnet\lib_controlnet\controlnet_ui\controlnet_ui_group.py", line 1122, in <lambda>
    fn=lambda x: gr.update(value=x + 1),
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
Traceback (most recent call last):
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\blocks.py", line 1077, in call_function
    assert block_fn.fn, f"function with index {fn_index} not defined."
AssertionError: function with index 593 not defined.
Traceback (most recent call last):
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\apps\stable-diffusion\Forge\webui\extensions-builtin\sd_forge_controlnet\lib_controlnet\controlnet_ui\preset.py", line 257, in update_reset_button
    infotext = ControlNetPresetUI.presets[preset_name]
KeyError: None
Traceback (most recent call last):
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\apps\stable-diffusion\Forge\system\python\lib\site-packages\gradio\blocks.py", line 1077, in call_function
    assert block_fn.fn, f"function with index {fn_index} not defined."
AssertionError: function with index 674 not defined.
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  8994.23583984375
[Memory Management] Model Memory (MB) =  4897.086494445801
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  3073.149345397949
Moving model(s) has taken 1.78 seconds
100%|██████████████████████████████████████████| 20/20 [00:06<00:00,  3.04it/s]
To load target model AutoencoderKL█████████████| 20/20 [00:04<00:00,  3.96it/s]
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  3665.79736328125
[Memory Management] Model Memory (MB) =  159.55708122253418
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  2482.240282058716
Moving model(s) has taken 0.06 seconds
Total progress: 100%|██████████████████████████| 20/20 [00:05<00:00,  3.53it/s]
100%|██████████████████████████████████████████| 20/20 [00:05<00:00,  3.98it/s]

dosubot[bot] commented 3 days ago

The new error you're encountering seems to be related to the mask parameter and possibly other variables not being properly defined or passed. Let's ensure that all necessary variables and modules are properly defined and imported.

Here is the corrected get_i2i_p method with the necessary imports and definitions:

from some_module import schedulers, controlnet_type  # Ensure these are correctly imported from the appropriate module

def get_i2i_p(self, p, args: ADetailerArgs, image):
    seed, subseed = self.get_seed(p)
    width, height = self.get_width_height(p, args)
    steps = self.get_steps(p, args)
    cfg_scale = self.get_cfg_scale(p, args)
    initial_noise_multiplier = self.get_initial_noise_multiplier(p, args)
    sampler_name = self.get_sampler(p, args)
    override_settings = self.get_override_settings(p, args)

    version_args = {}
    if schedulers:
        version_args.update(self.get_scheduler(p, args))

    # Create the mask
    bboxes = self.get_bounding_boxes(p)  # Assuming this method exists to get bounding boxes
    mask = create_mask_from_bbox(bboxes, (width, height))

    i2i = StableDiffusionProcessingImg2Img(
        init_images=[image],
        resize_mode=0,
        denoising_strength=args.ad_denoising_strength,
        mask=mask,  # Pass the created mask here
        mask_blur=args.ad_mask_blur,
        inpainting_fill=1,
        inpaint_full_res=args.ad_inpaint_only_masked,
        inpaint_full_res_padding=args.ad_inpaint_only_masked_padding,
        inpainting_mask_invert=0,
        initial_noise_multiplier=initial_noise_multiplier,
        sd_model=p.sd_model,
        outpath_samples=p.outpath_samples,
        outpath_grids=p.outpath_grids,
        prompt="",  # replace later
        negative_prompt="",
        styles=p.styles,
        seed=seed,
        subseed=subseed,
        subseed_strength=p.subseed_strength,
        seed_resize_from_h=p.seed_resize_from_h,
        seed_resize_from_w=p.seed_resize_from_w,
        sampler_name=sampler_name,
        batch_size=1,
        n_iter=1,
        steps=steps,
        cfg_scale=cfg_scale,
        width=width,
        height=height,
        restore_faces=args.ad_restore_face,
        tiling=p.tiling,
        extra_generation_params=copy_extra_params(p.extra_generation_params),
        do_not_save_samples=True,
        do_not_save_grid=True,
        override_settings=override_settings,
        **version_args,
    )

    i2i.cached_c = [None, None]
    i2i.cached_uc = [None, None]
    i2i.scripts, i2i.script_args = self.script_filter(p, args)
    i2i._ad_disabled = True
    i2i._ad_inner = True

    if args.ad_controlnet_model != "Passthrough" and controlnet_type != "forge":
        self.disable_controlnet_units(i2i.script_args)

    if args.ad_controlnet_model not in ["None", "Passthrough"]:
        self.update_controlnet_args(i2i, args)
    elif args.ad_controlnet_model == "None":
        i2i.control_net_enabled = False

    return i2i

Ensure that schedulers and controlnet_type are correctly imported or defined in your script. If they are part of another module, replace some_module with the actual module name [1].

To continue talking to Dosu, mention @dosu.

DiegoRRR commented 2 days ago

@dosu : I replaced the get_i2i_p method.

I don't know Python, but I think `schedulers` is defined by `from aaaaaa.conditional import create_binary_mask, schedulers` and `controlnet_type` is defined by `from controlnet_ext import (CNHijackRestore, ControlNetExt, cn_allow_script_control, controlnet_exists, controlnet_type, get_cn_models)`.
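
Based on those lines, the bot's `from some_module import ...` placeholder could presumably be replaced with something like the following. The module paths are taken from the imports quoted above; they are an assumption and may differ between adetailer versions:

# Replaces the bot's "from some_module import ..." placeholder.
# Module paths are taken from the imports quoted above and are not
# verified against the installed adetailer version.
from aaaaaa.conditional import create_binary_mask, schedulers
from controlnet_ext import (
    CNHijackRestore,
    ControlNetExt,
    cn_allow_script_control,
    controlnet_exists,
    controlnet_type,
    get_cn_models,
)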

There is no error anymore, but it does nothing. I check the ADetailer checkbox, "face_yolov8n.pt" is selected, and I leave all settings at their defaults, then generate. But the picture is exactly the same as without ADetailer: no improvement on the generated face.

Bing-su commented 2 days ago

The error you really should focus on is not the NoneType error, but rather the following:

RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.

This shows that there was a problem with the package installation. Try a clean reinstall.
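
For example, a generic way to verify the versions the error message refers to (this is not an official Forge command; it assumes you run it with the same Python interpreter bundled under Forge\system\python):

# Generic compatibility check; run with the same Python that Forge uses.
import torch
import torchvision

print("torch:      ", torch.__version__)        # the log above shows 2.0.1+cu118
print("torchvision:", torchvision.__version__)  # must be built against the same torch/CUDA
print("CUDA build: ", torch.version.cuda)

# If torchvision's C++ ops load correctly, this call succeeds; otherwise it
# raises the same "Couldn't load custom C++ ops" RuntimeError seen in the log.
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0], [1.0, 1.0, 11.0, 11.0]])
scores = torch.tensor([0.9, 0.8])
print("nms ok:", torchvision.ops.nms(boxes, scores, iou_threshold=0.5))

If the pair does not match the compatibility matrix linked in the error (for torch 2.0.1 the matching torchvision release is 0.15.2), reinstalling torchvision so it matches the installed torch should make the C++ ops load again.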