WASasquatch / was-node-suite-comfyui

An extensive node suite for ComfyUI with over 210 new nodes
MIT License

function bounded_image_blend_with_mask is missing from WAS_Node_Suite.py #234

Open · aqgasa opened this issue 11 months ago

aqgasa commented 11 months ago

File: WAS_Node_Suite.py

The class "WAS_Bounded_Image_Blend_With_Mask" (line 11396) is missing the function "bounded_image_blend_with_mask".

Instead, the class contains the function "bounded_image_crop_with_mask" (line 11418).

WASasquatch commented 11 months ago

I was testing those classes in another custom node while trying to fix them, so they may not work, though I've patched the name.

aqgasa commented 11 months ago

I replaced the function with this earlier version which seems to work (at least in the workflows I've tried):

def bounded_image_blend_with_mask(self, target, target_mask, target_bounds, source, blend_factor, feathering):
    # Convert PyTorch tensors to PIL images
    target_pil = Image.fromarray((target.squeeze(0).cpu().numpy() * 255).clip(0, 255).astype(np.uint8))
    target_mask_pil = Image.fromarray((target_mask.cpu().numpy() * 255).clip(0, 255).astype(np.uint8), mode='L')
    source_pil = Image.fromarray((source.squeeze(0).cpu().numpy() * 255).clip(0, 255).astype(np.uint8))

    # Extract the target bounds
    rmin, rmax, cmin, cmax = target_bounds

    # Create a blank image with the same size and mode as the target
    source_positioned = Image.new(target_pil.mode, target_pil.size)

    # Paste the source image onto the blank image using the target bounds
    source_positioned.paste(source_pil, (cmin, rmin))

    # Create a blend mask using the target mask and blend factor
    blend_mask = target_mask_pil.point(lambda p: p * blend_factor).convert('L')

    # Apply feathering (Gaussian blur) to the blend mask if feathering is greater than 0
    if feathering > 0:
        blend_mask = blend_mask.filter(ImageFilter.GaussianBlur(radius=feathering))

    # Blend the source and target images using the blend mask
    result = Image.composite(source_positioned, target_pil, blend_mask)

    # Convert the result back to a PyTorch tensor
    result_tensor = torch.from_numpy(np.array(result).astype(np.float32) / 255).unsqueeze(0)

    return (result_tensor,)
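For anyone who wants a quick sanity check outside ComfyUI, here is a standalone harness (not from the repo; the free-function wrapper, dummy tensor shapes, and bounds values are my own assumptions, following ComfyUI's usual (1, H, W, C) image / (H, W) mask convention):

```python
import numpy as np
import torch
from PIL import Image, ImageFilter

# Same body as the patched method above, just without `self`.
def bounded_image_blend_with_mask(target, target_mask, target_bounds, source,
                                  blend_factor, feathering):
    target_pil = Image.fromarray((target.squeeze(0).cpu().numpy() * 255).clip(0, 255).astype(np.uint8))
    target_mask_pil = Image.fromarray((target_mask.cpu().numpy() * 255).clip(0, 255).astype(np.uint8), mode='L')
    source_pil = Image.fromarray((source.squeeze(0).cpu().numpy() * 255).clip(0, 255).astype(np.uint8))
    rmin, rmax, cmin, cmax = target_bounds
    source_positioned = Image.new(target_pil.mode, target_pil.size)
    source_positioned.paste(source_pil, (cmin, rmin))
    blend_mask = target_mask_pil.point(lambda p: p * blend_factor).convert('L')
    if feathering > 0:
        blend_mask = blend_mask.filter(ImageFilter.GaussianBlur(radius=feathering))
    result = Image.composite(source_positioned, target_pil, blend_mask)
    return (torch.from_numpy(np.array(result).astype(np.float32) / 255).unsqueeze(0),)

# Dummy inputs: black 64x64 target, white 16x16 source, mask covering the bounds.
target = torch.zeros(1, 64, 64, 3)
source = torch.ones(1, 16, 16, 3)
mask = torch.zeros(64, 64)
mask[8:24, 8:24] = 1.0
bounds = (8, 23, 8, 23)  # (rmin, rmax, cmin, cmax)

(out,) = bounded_image_blend_with_mask(target, mask, bounds, source, 1.0, 0)
print(out.shape)  # torch.Size([1, 64, 64, 3])
```

With full blend factor and no feathering, the region inside the bounds should come back pure white (the source) and everything outside should stay black (the target).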