laksjdjf / IPAdapter-ComfyUI


Mask input causes error #35

Open · MoonMoon82 opened this issue 1 year ago

MoonMoon82 commented 1 year ago

Hi! When I try to add a mask like in the example you showed, this error appears while the KSampler node is running:

```
Error occurred when executing KSampler:

float division by zero

  File "C:\SD\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1236, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1206, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 107, in animatediff_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 97, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 785, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler(), sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 690, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 630, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(sampler_name))(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **extra_options)
  File "C:\SD\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "C:\SD\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 323, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
  File "C:\SD\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 311, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 289, in sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 265, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 63, in apply_model
    return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
  File "C:\SD\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 627, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 56, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "C:\SD\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 695, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "C:\SD\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 525, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 123, in checkpoint
    return func(*inputs)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 624, in _forward
    n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options)
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\custom_nodes\IPAdapter-ComfyUI\ip_adapter.py", line 295, in __call__
    mask_downsample = torch.nn.functional.interpolate(mask.unsqueeze(0).unsqueeze(0), scale_factor= 1/8/down_sample_rate, mode="nearest").squeeze(0)
```

The error started appearing a few ComfyUI updates ago.

laksjdjf commented 1 year ago

This may be fixed by https://github.com/laksjdjf/IPAdapter-ComfyUI/pull/36

artmamedov commented 1 year ago

Hi, this didn't fix it; I still get the same error.

laksjdjf commented 1 year ago

Is the resolution of the mask the same as the image?

artmamedov commented 1 year ago

@laksjdjf I think, from my printouts, the problem is that mask.shape[0] and mask.shape[1] are both 1, so mask_size = 1 and the downsample rate becomes 0. Here is what I have:

DOWNSAMPLE RATE 0 MASK SHAPE torch.Size([1, 1, 1023, 1023]) IP OUT SHAPE torch.Size([2, 16129, 320])

inpaint_ipadapter.json

Note that the mask works perfectly fine with the inpainting ControlNet. The IPAdapter also works well when the mask isn't included; the problem only occurs when the mask is added to the IPAdapter.

laksjdjf commented 1 year ago

Where did you insert the print statement?

Is it above this? https://github.com/laksjdjf/IPAdapter-ComfyUI/blob/1127a2dc95d9df646bc7fdf458bdb57d671d7118/ip_adapter.py#L300

artmamedov commented 1 year ago

Yeah. The division by zero comes from down_sample_rate = 0, which comes from mask_size = 1 (since mask.shape[0] = 1 and mask.shape[1] = 1), so when you do

down_sample_rate = int((mask_size // 64 // out.shape[1]) ** (1/2))

it comes out as 0 and the later 1/8/down_sample_rate divides by zero.
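For reference, a minimal sketch of this failure mode, assuming the mask reaches the patch as a 4D [1, 1, H, W] tensor while the quoted formula indexes shape[0] and shape[1]; `num_tokens` stands in for out.shape[1], and the numbers mirror the log above:

```python
import torch

# Illustrative numbers from the log: a 1023x1023 mask that still carries
# batch/channel dims, and an attention output with 16129 (= 127 * 127) tokens.
mask = torch.ones(1, 1, 1023, 1023)
num_tokens = 16129                                              # stands in for out.shape[1]

# What the quoted formula computes when dims 0 and 1 are treated as H and W:
mask_size = mask.shape[0] * mask.shape[1]                       # 1 * 1 = 1
down_sample_rate = int((mask_size // 64 // num_tokens) ** 0.5)  # int(0.0) = 0
# scale_factor = 1 / 8 / down_sample_rate                       # -> ZeroDivisionError

# Squeezing away the singleton dims first restores the intended size:
mask_2d = mask.squeeze()                                        # shape [1023, 1023]
mask_size = mask_2d.shape[0] * mask_2d.shape[1]                 # 1046529
down_sample_rate = int((mask_size // 64 // num_tokens) ** 0.5)  # 1
print(down_sample_rate)
```

In other words, squeezing the singleton dimensions (or indexing shape[-2] and shape[-1]) restores the intended mask_size.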

MoonMoon82 commented 1 year ago

I can confirm the issue I was initially referring to is fixed. But the error message still appears if I try to do a second (hires) cycle, because the mask resolution no longer matches the latent resolution. It would be awesome if this could be solved by automatically stretch-scaling the mask to the latent image resolution! (?)
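A hedged sketch of that idea on the workflow side, stretching the mask to the hires pass's pixel resolution before handing it to the IPAdapter; `latent_h`/`latent_w` are whatever the second-pass latent actually is, and the helper name is hypothetical:

```python
import torch
import torch.nn.functional as F

def stretch_mask_to_latent(mask: torch.Tensor, latent_h: int, latent_w: int) -> torch.Tensor:
    """Stretch a [H, W] (or [1, H, W]) mask to latent_h*8 x latent_w*8 pixels,
    i.e. the image resolution corresponding to the hires latent."""
    m = mask.float()
    while m.dim() < 4:                 # interpolate expects [N, C, H, W]
        m = m.unsqueeze(0)
    m = F.interpolate(m, size=(latent_h * 8, latent_w * 8), mode="nearest")
    return m.squeeze(0).squeeze(0)     # back to [H, W]
```

In ComfyUI terms, this is roughly what rescaling the mask to the hires resolution with a resize node would do before connecting it to the IPAdapter.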

laksjdjf commented 1 year ago

> @laksjdjf I think, from my printouts, the problem is that mask.shape[0] and mask.shape[1] are both 1, so mask_size = 1 and the downsample rate becomes 0. Here is what I have:
>
> DOWNSAMPLE RATE 0 MASK SHAPE torch.Size([1, 1, 1023, 1023]) IP OUT SHAPE torch.Size([2, 16129, 320])
>
> inpaint_ipadapter.json
>
> Note that the mask works perfectly fine with the inpainting ControlNet. The IPAdapter also works well when the mask isn't included; the problem only occurs when the mask is added to the IPAdapter.

I ran this workflow and, strangely, no errors occurred. Adding mask.squeeze() may solve the problem, but another error occurs for resolutions that are not divisible by 8.
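For what it's worth, one plausible reading of that second error, using the formula quoted earlier and the 1023x1023 mask from this thread (the deeper block's token count is an illustrative assumption):

```python
# Rough arithmetic only: with the quoted formula, the computed rate can lag
# behind the UNet's actual grid at a deeper attention block.
mask_size = 1023 * 1023                                  # 1046529, after squeezing to [H, W]

# Shallow block: 127 * 127 = 16129 tokens -> rate 1, mask scaled to 127x127. OK.
rate_shallow = int((mask_size // 64 // 16129) ** 0.5)    # = 1

# Deeper block: roughly 64 * 64 = 4096 tokens (odd sizes round up when halved),
# but the integer square root still gives 1, so the mask stays at 127x127.
rate_deep = int((mask_size // 64 // 4096) ** 0.5)        # int(sqrt(3)) = 1, not 2
print(rate_shallow, rate_deep)
```

If that reading is right, the mask ends up with 16129 entries while the deeper block expects a 4096-token grid, so the shapes no longer line up for resolutions whose latent side is odd.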

laksjdjf commented 1 year ago

> I can confirm the issue I was initially referring to is fixed. But the error message still appears if I try to do a second (hires) cycle, because the mask resolution no longer matches the latent resolution. It would be awesome if this could be solved by automatically stretch-scaling the mask to the latent image resolution! (?)

I have a solution to this problem and will push an update, since a recent ComfyUI update makes the aspect ratio of the image available.
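Presumably knowing the latent's height and width lets the per-block mask size be derived from the token count without assuming a square grid; a rough sketch of that idea, with all helper names hypothetical:

```python
import math
import torch
import torch.nn.functional as F

def grid_from_aspect(latent_h: int, latent_w: int, out_tokens: int) -> tuple[int, int]:
    """Recover the (h, w) token grid at an attention block from the latent size and
    the token count, assuming the UNet rescales both axes uniformly."""
    factor = math.sqrt((latent_h * latent_w) / out_tokens)
    return max(1, round(latent_h / factor)), max(1, round(latent_w / factor))

def mask_for_block(mask_2d: torch.Tensor, latent_h: int, latent_w: int, out_tokens: int) -> torch.Tensor:
    # Rounding is a simplification; exact grids may need the UNet's own rounding rules.
    h, w = grid_from_aspect(latent_h, latent_w, out_tokens)
    m = F.interpolate(mask_2d[None, None].float(), size=(h, w), mode="nearest")
    return m.reshape(1, h * w, 1)      # broadcastable over the token dimension
```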

artmamedov commented 1 year ago

Hi, was this ever pushed and solved?

laksjdjf commented 1 year ago

Some changes have been made to the mask handling. It should now accept masks of any resolution.