tencent-ailab / IP-Adapter

The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with an image prompt.
Apache License 2.0

person's skin is blue #368

Closed: guoti777 closed this issue 1 month ago

guoti777 commented 1 month ago

IP-Adapter-FaceID (ip-adapter-faceid); base model: Realistic_Vision_V4.0_noVAE; VAE: sd-vae-ft-mse; trained on 1M images for 100k iterations.

aycaecemgul commented 1 month ago

Please add reproducible code; there is probably a mistake in the RGB conversion.

guoti777 commented 1 month ago

> Please add reproducible code; there is probably a mistake in the RGB conversion.

Thanks! Here is the transform code:

1. image:

        import os
        from PIL import Image
        from torchvision import transforms

        self.transform = transforms.Compose([
            transforms.Resize(self.size, interpolation=transforms.InterpolationMode.BILINEAR),
            transforms.CenterCrop(self.size),
            transforms.ToTensor(),               # HWC uint8 in [0, 255] -> CHW float in [0, 1]
            transforms.Normalize([0.5], [0.5]),  # [0, 1] -> [-1, 1], broadcast over all 3 channels
        ])
        # PIL decodes images in RGB channel order
        raw_image = Image.open(os.path.join(self.image_root_path, image_file))
        image = self.transform(raw_image.convert("RGB"))
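
For reference, a standalone version of the same transform (512 is an assumed value for `self.size`, and `face.jpg` is a placeholder path). The single-value `Normalize([0.5], [0.5])` broadcasts over all three channels and maps `ToTensor`'s `[0, 1]` output to the `[-1, 1]` range the Stable Diffusion VAE expects:

        from PIL import Image
        from torchvision import transforms

        # same pipeline as above with an assumed size of 512; quick sanity check of the output range
        t = transforms.Compose([
            transforms.Resize(512, interpolation=transforms.InterpolationMode.BILINEAR),
            transforms.CenterCrop(512),
            transforms.ToTensor(),
            transforms.Normalize([0.5], [0.5]),
        ])
        x = t(Image.open("face.jpg").convert("RGB"))
        print(x.shape, x.min().item(), x.max().item())  # torch.Size([3, 512, 512]), values in [-1, 1]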

2. faceid_embeds:

        import cv2
        import torch

        # cv2.imread returns BGR, which is the channel order insightface expects
        image = cv2.imread(img_path)
        faces = app.get(image)
        if len(faces) == 1:
            faceid_embeds = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)
            # the saved embedding is consumed at inference time (see the sketch below)
            torch.save(faceid_embeds, bin_path)
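
For completeness, the saved `.bin` embedding is what gets passed to the FaceID adapter at inference time. A minimal sketch following the pattern in the repo's README (the base model and VAE match the training setup described above; the checkpoint, embedding, and prompt values are placeholders):

        import torch
        from diffusers import AutoencoderKL, StableDiffusionPipeline
        from ip_adapter.ip_adapter_faceid import IPAdapterFaceID

        # base model and VAE as described in the training setup above
        vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
        pipe = StableDiffusionPipeline.from_pretrained(
            "SG161222/Realistic_Vision_V4.0_noVAE", vae=vae, torch_dtype=torch.float16, safety_checker=None
        )

        ip_model = IPAdapterFaceID(pipe, "ip-adapter-faceid_sd15.bin", "cuda")  # placeholder checkpoint path

        faceid_embeds = torch.load("face.bin")  # the embedding saved in step 2
        images = ip_model.generate(
            prompt="photo of a person in a garden",
            faceid_embeds=faceid_embeds,
            num_samples=1,
            width=512,
            height=512,
            num_inference_steps=30,
            seed=42,
        )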
3. encode:

        # collate: batch the preprocessed images and the saved face embeddings
        images = torch.stack([example["image"] for example in data])
        face_id_embed = torch.stack([example["face_id_embed"] for example in data])

        # training step
        latents = vae.encode(batch["images"].to(accelerator.device, dtype=weight_dtype)).latent_dist.sample()
        latents = latents * vae.config.scaling_factor
        image_embeds = batch["face_id_embed"].to(accelerator.device, dtype=weight_dtype)
        with torch.no_grad():
            encoder_hidden_states = text_encoder(batch["text_input_ids"].to(accelerator.device))[0]
        # noisy_latents and timesteps come from the scheduler's noising step (see the sketch below)
        noise_pred = ip_adapter(noisy_latents, timesteps, encoder_hidden_states, image_embeds)
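
`noisy_latents` and `timesteps` are not defined in the snippet above; presumably they come from the standard diffusion noising step applied to `latents`. A minimal sketch of that step, assuming diffusers' `DDPMScheduler` (the scheduler choice and model ID are assumptions, not taken from the snippet):

        import torch
        from diffusers import DDPMScheduler

        # scheduler config loaded from the base model
        noise_scheduler = DDPMScheduler.from_pretrained(
            "SG161222/Realistic_Vision_V4.0_noVAE", subfolder="scheduler"
        )

        # sample Gaussian noise and a random timestep per example, then noise the latents
        noise = torch.randn_like(latents)
        timesteps = torch.randint(
            0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],), device=latents.device
        ).long()
        noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)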
alexblattner commented 1 month ago

It has to do with the conversion between cv2 and PIL: the channel ordering is different (cv2 uses BGR, PIL uses RGB).
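
Concretely, mixing the two loaders anywhere without an explicit conversion swaps the red and blue channels, which is exactly what blue skin looks like. A minimal sketch of the two safe conversions (`face.jpg` is a placeholder path):

        import cv2
        import numpy as np
        from PIL import Image

        # cv2 -> PIL: convert BGR to RGB before building a PIL image
        bgr = cv2.imread("face.jpg")
        pil_image = Image.fromarray(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))

        # PIL -> cv2/insightface: convert RGB back to BGR before any cv2-based code
        rgb = np.array(Image.open("face.jpg").convert("RGB"))
        bgr_again = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)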