The `prepare_ip_adapter_image_embeds` function has a bug that results in unintended feature mixing across images during batch processing. This issue causes the generated images to combine features from multiple reference images instead of maintaining a one-to-one correspondence with each reference.

When using the pipeline in batch mode, I pass `ip_adapter_image_embeds` with a shape of `(2*B, N, C)` and set `num_images_per_prompt=1`. I expect the pipeline to generate `B` images, where each generated image corresponds directly to one reference in `ip_adapter_image_embeds` (note that `2*B` includes the negative image embeddings for classifier-free guidance):

https://github.com/huggingface/diffusers/blob/31058cdaef63ca660a1a045281d156239fba8192/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L950-L957

https://github.com/huggingface/diffusers/blob/9a92b8177cb3f8bf4b095fff55da3b45a3607960/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L561-L569
However, when processing `ip_adapter_image_embeds` in the pipeline, the tensor gets duplicated `num_images_per_prompt * batch_size = 1 * B` times. This leads to the `image_embeds` tensor having a shape of `(B*2*B, N, C)` instead of the expected shape of `(2*B, N, C)`.
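To make the shape bug concrete, here is a minimal sketch of the duplication effect described above (illustrative only; the variable names, the `C = 512` dimension, and the `repeat` call are stand-ins for the pipeline's internal logic, not its exact code):

```python
import torch

B, N, C = 2, 1, 512
# User-provided embeddings: negative half + positive half for CFG.
image_embeds = torch.randn(2 * B, N, C)

num_images_per_prompt, batch_size = 1, B
# The pipeline repeats the embeddings num_images_per_prompt * batch_size times,
# even though the tensor already holds one embedding per prompt:
duplicated = image_embeds.repeat(num_images_per_prompt * batch_size, 1, 1)
print(duplicated.shape)  # torch.Size([8, 1, 512]) == (B*2*B, N, C); expected (2*B, N, C) == (4, 1, 512)
```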
In the `IPAdapterAttnProcessor2_0` class, the `view` operation is applied to the input `image_embeds` tensor. This prevents a shape mismatch error, but it leads to `ip_key` and `ip_value` containing mixed features from multiple reference images. As a result, the features of the generated images are a mixture of several reference images instead of having a one-to-one correspondence:

https://github.com/huggingface/diffusers/blob/9a92b8177cb3f8bf4b095fff55da3b45a3607960/src/diffusers/models/attention_processor.py#L4112-L4122
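A small sketch of why the `view` silently succeeds (simplified: the real processor first projects through its IP-adapter key/value layers, and the dimensions here are made up for illustration):

```python
import torch

batch_size, N, heads, head_dim = 4, 1, 2, 4
inner_dim = heads * head_dim
# image_embeds arrives with twice as many rows as the latents batch (8 vs. 4):
image_embeds = torch.randn(2 * batch_size, N, inner_dim)

# The processor reshapes against the latents' batch_size. The view succeeds
# because the element count still divides evenly, but rows belonging to
# different reference images are folded into the same batch element:
ip_key = image_embeds.view(batch_size, -1, heads, head_dim).transpose(1, 2)
print(ip_key.shape)  # torch.Size([4, 2, 2, 4]) -- no error, features silently mixed
```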
**Although I temporarily resolved the issue by changing the `num_images_per_prompt * batch_size` argument passed to the `prepare_ip_adapter_image_embeds` method to `num_images_per_prompt`, could this potentially cause issues in other scenarios?**
```python
if ip_adapter_image is not None or ip_adapter_image_embeds is not None:
    image_embeds = self.prepare_ip_adapter_image_embeds(
        ip_adapter_image,
        ip_adapter_image_embeds,
        device,
        num_images_per_prompt,  # temporary fix: was num_images_per_prompt * batch_size
        self.do_classifier_free_guidance,
    )
```
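With that change, the repeat count no longer scales with the batch size, so the shape is preserved (again a standalone illustration of the repeat-count argument, not the library's exact internals):

```python
import torch

B, N, C = 2, 1, 512
image_embeds = torch.randn(2 * B, N, C)
num_images_per_prompt = 1

# Repeat count is now num_images_per_prompt (= 1) rather than
# num_images_per_prompt * batch_size (= B), so the tensor is unchanged:
fixed = image_embeds.repeat(num_images_per_prompt, 1, 1)
print(fixed.shape)  # torch.Size([4, 1, 512]) == (2*B, N, C)
```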
Reproduction
Here’s a demo script that illustrates the issue. The script loads two reference images (`image1` and `image2`), extracts their face embeddings with insightface, and uses them as input to the pipeline in batch mode.
```python
import cv2
import numpy as np
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler
from diffusers.utils import load_image
from insightface.app import FaceAnalysis

pipeline = StableDiffusionPipeline.from_pretrained(
    "../checkpoints/Realistic_Vision_V4.0_noVAE",  # replace with your model weights path
    torch_dtype=torch.float16,
).to("cuda")
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.load_ip_adapter(
    "../checkpoints/IP-Adapter",  # replace with your model weights path
    subfolder=None,
    weight_name="ip-adapter-faceid_sd15.bin",
    image_encoder_folder=None,
)
pipeline.set_ip_adapter_scale(1.0)

app = FaceAnalysis(
    name="/root/data1/IP-Face/checkpoints/insightface",  # replace with your model weights path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
app.prepare(ctx_id=0, det_size=(384, 384))

# Extract one face embedding of shape (1, 1, C) per reference image.
image1 = load_image("../test_image/65.jpg")
image2 = load_image("../test_image/27022.jpg")

face1 = app.get(cv2.cvtColor(np.asarray(image1), cv2.COLOR_RGB2BGR))
face1_embedding = torch.from_numpy(face1[0].normed_embedding).reshape(1, 1, -1)

face2 = app.get(cv2.cvtColor(np.asarray(image2), cv2.COLOR_RGB2BGR))
face2_embedding = torch.from_numpy(face2[0].normed_embedding).reshape(1, 1, -1)

# Stack the reference embeddings and prepend zero embeddings as the negative
# half for classifier-free guidance: final shape (2*B, N, C) with B = 2.
ref_face_embedding = torch.cat([face1_embedding, face2_embedding])
neg_ref_face_embedding = torch.zeros_like(ref_face_embedding)
batch_id_embeds = torch.cat([neg_ref_face_embedding, ref_face_embedding]).to(
    dtype=torch.float16, device="cuda"
)
batch_size = batch_id_embeds.shape[0] // 2

generator = torch.Generator(device="cpu").manual_seed(2023)
images = pipeline(
    prompt=["photo of a woman in red dress in a garden"] * batch_size,
    negative_prompt=["monochrome, lowres, bad anatomy, worst quality, low quality"] * batch_size,
    ip_adapter_image_embeds=[batch_id_embeds],
    num_inference_steps=50,
    num_images_per_prompt=1,
    generator=generator,
).images
```
Reference Images
The reference images `image1` and `image2` used as input embeddings:
image1
image2
Generated Images in Batch Mode
Using the demo code above, the following images were generated. These images exhibit features mixed from both references instead of corresponding uniquely to one.
Generated Image 1
Generated Image 2
Expected Behavior
In single-image processing (non-batch mode), the pipeline works as expected, producing distinct images for each reference:
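For comparison, here is a minimal sketch of the non-batch loop that behaves correctly, reusing the embeddings and pipeline from the script above (the structure is illustrative, not verbatim from my original test):

```python
# One pipeline call per reference embedding; no batching of embeddings.
for id_embedding in [face1_embedding, face2_embedding]:
    single_embeds = torch.cat(
        [torch.zeros_like(id_embedding), id_embedding]  # negative + positive for CFG
    ).to(dtype=torch.float16, device="cuda")
    image = pipeline(
        prompt="photo of a woman in red dress in a garden",
        negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
        ip_adapter_image_embeds=[single_embeds],
        num_inference_steps=50,
        num_images_per_prompt=1,
        generator=torch.Generator(device="cpu").manual_seed(2023),
    ).images[0]
```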
Thanks for the extremely well-written issue. You seem to already have a handle on how this could be fixed. Would you maybe like to take a stab at opening a PR with the fix?