ip-adapter-faceid; base model: Realistic_Vision_V4.0_noVAE; VAE: sd-vae-ft-mse; 1M images, 100k iterations.
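For reference, a minimal sketch of how this base model / separate VAE combination is usually loaded for training with diffusers. The Hugging Face repo id for Realistic Vision and the component layout are assumptions, not taken from this thread:

import torch
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel

base = "SG161222/Realistic_Vision_V4.0_noVAE"  # assumed HF repo id for the base model
# the base checkpoint ships without a VAE, so the fine-tuned MSE VAE is loaded separately
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
tokenizer = CLIPTokenizer.from_pretrained(base, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(base, subfolder="text_encoder")
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet")
noise_scheduler = DDPMScheduler.from_pretrained(base, subfolder="scheduler")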
Please add reproducible code; there is probably a mistake in the RGB conversion?
Thanks! Here is the transform code:
# 1: image preprocessing (PIL, RGB)
self.transform = transforms.Compose([
    transforms.Resize(self.size, interpolation=transforms.InterpolationMode.BILINEAR),
    transforms.CenterCrop(self.size),
    transforms.ToTensor(),
    # transforms.Normalize([-1], [1]),
    transforms.Normalize([0.5], [0.5]),  # maps [0, 1] to [-1, 1]; broadcasts over all 3 channels
    # transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])

raw_image = Image.open(os.path.join(self.image_root_path, image_file))
image = self.transform(raw_image.convert("RGB"))
2: faceid_embeds:
# app is an insightface FaceAnalysis instance; cv2.imread returns the image in BGR order
image = cv2.imread(img_path)
faces = app.get(image)
if len(faces) == 1:
    faceid_embeds = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)
    torch.save(faceid_embeds, bin_path)
# collate: stack the preprocessed images and face id embeddings into a batch
images = torch.stack([example["image"] for example in data])
face_id_embed = torch.stack([example["face_id_embed"] for example in data])
# training step (noise sampling / timestep code not shown in the snippet)
latents = vae.encode(batch["images"].to(accelerator.device, dtype=weight_dtype)).latent_dist.sample()
latents = latents * vae.config.scaling_factor
image_embeds = batch["face_id_embed"].to(accelerator.device, dtype=weight_dtype)
with torch.no_grad():
    encoder_hidden_states = text_encoder(batch["text_input_ids"].to(accelerator.device))[0]
noise_pred = ip_adapter(noisy_latents, timesteps, encoder_hidden_states, image_embeds)
It has to do with the conversion between cv2 and PIL: cv2 loads images in BGR order while PIL uses RGB, so the color channels are mapped differently between the two pipelines.
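For anyone hitting the same problem, a minimal sketch of keeping both pipelines in the same color space; img_path and app are the names from the snippets above, everything else is illustrative:

import cv2
import numpy as np
from PIL import Image

# insightface is normally fed cv2.imread output, i.e. a BGR array
bgr = cv2.imread(img_path)
faces = app.get(bgr)

# convert to RGB before doing anything PIL/torchvision based, so it matches
# the Image.open(...).convert("RGB") path used by the training transform
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
pil_image = Image.fromarray(rgb)

# or start from PIL and convert to BGR only for insightface
pil_image = Image.open(img_path).convert("RGB")
bgr = cv2.cvtColor(np.array(pil_image), cv2.COLOR_RGB2BGR)
faces = app.get(bgr)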