huggingface / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
https://huggingface.co/docs/diffusers
Apache License 2.0

training example for instruct pix2pix doesn't zero out embeds #7920

Open bghira opened 2 months ago

bghira commented 2 months ago

Describe the bug

When running inference on SDXL, the pipeline config specifies that the prompt embeddings be zeroed out when the prompt is empty (force_zeros_for_empty_prompt), but the InstructPix2Pix SDXL training example instead builds its null conditioning by encoding an empty string.

Reproduction

    # Get null conditioning
    def compute_null_conditioning():
        null_conditioning_list = []
        for a_tokenizer, a_text_encoder in zip(tokenizers, text_encoders):
            null_conditioning_list.append(
                a_text_encoder(
                    tokenize_captions([""], tokenizer=a_tokenizer).to(accelerator.device),
                    output_hidden_states=True,
                ).hidden_states[-2]
            )
        return torch.concat(null_conditioning_list, dim=-1)

    null_conditioning = compute_null_conditioning()

This could likely be replaced with a probabilistic call to torch.zeros_like() inside the training loop instead.
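
As a rough illustration, conditioning dropout with zeroed embeddings could look something like this (a minimal sketch with hypothetical names, not the training script's actual code):

    import torch

    def maybe_zero_text_conditioning(prompt_embeds, pooled_prompt_embeds, dropout_prob):
        # Sketch only: prompt_embeds is (batch, seq_len, dim), pooled_prompt_embeds is (batch, dim).
        # With probability dropout_prob, replace a sample's text conditioning with zeros,
        # matching what the SDXL pipeline substitutes for an empty prompt at inference time.
        batch_size = prompt_embeds.shape[0]
        drop_mask = torch.rand(batch_size, device=prompt_embeds.device) < dropout_prob
        prompt_embeds = torch.where(drop_mask[:, None, None], torch.zeros_like(prompt_embeds), prompt_embeds)
        pooled_prompt_embeds = torch.where(drop_mask[:, None], torch.zeros_like(pooled_prompt_embeds), pooled_prompt_embeds)
        return prompt_embeds, pooled_prompt_embeds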

I've checked the values of the embeds, and classifier-free guidance at inference time definitely makes use of the zero embeds and not just "", and the two end up producing very different results.

Other models, though, like DeepFloyd, just use "" from e.g. T5 and behave rather differently.
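
For what it's worth, the gap between "" and zeros is easy to check directly (a sketch; the model id and dtype are just examples):

    import torch
    from diffusers import StableDiffusionXLPipeline

    # Sketch: show that the "" text embedding is nowhere near the all-zeros embedding
    # that the SDXL pipeline substitutes when force_zeros_for_empty_prompt is true.
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    empty_embeds, _, empty_pooled, _ = pipe.encode_prompt(prompt="", do_classifier_free_guidance=False)
    print(empty_embeds.abs().mean().item(), empty_pooled.abs().mean().item())  # clearly nonzero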

Logs

No response

System Info

N/A

Who can help?

@sayakpaul

sayakpaul commented 2 months ago

Thanks for bringing this up. Could you show a comparison between zeroing out the way you mentioned and the existing approach?

I've checked the values of the embeds, and classifier-free guidance at inference time definitely makes use of the zero embeds and not just "", and the two end up producing very different results.

The SD IP2P pipeline uses "", though, when a negative prompt is not provided: https://github.com/huggingface/diffusers/blob/ec9e88139a09b7410df6aa98fb186d8118002c48/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py#L558

However, it makes use of zeros_like for the unconditional image embeddings: https://github.com/huggingface/diffusers/blob/ec9e88139a09b7410df6aa98fb186d8118002c48/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py#L869
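
In rough pseudocode, those two spots boil down to something like this (a paraphrase, not the verbatim pipeline source):

    import torch

    # Paraphrased sketch of the SD InstructPix2Pix CFG batch (not verbatim library code):
    # the unconditional text branch comes from encoding "" (or the negative prompt),
    # while the unconditional image branch is an all-zeros tensor.
    def build_cfg_inputs(prompt_embeds, negative_prompt_embeds, image_latents):
        text_embeds = torch.cat([prompt_embeds, negative_prompt_embeds, negative_prompt_embeds], dim=0)
        uncond_image_latents = torch.zeros_like(image_latents)
        image_latents = torch.cat([image_latents, image_latents, uncond_image_latents], dim=0)
        return text_embeds, image_latents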

bghira commented 2 months ago

The base model was trained using it, so I figured aligning with the base model's training and inference would give better results.

From my own tests, I can now reduce the step count required when running the default config on the SDXL pipelines, i.e. with force_zeros_for_empty_prompt set to True.

I also see much better learning. This model started from ptx0/terminus-xl-velocity-v1, which was unable to spell.

1000 steps of tuning later:

[image: sample output after 1000 steps of tuning]

the base model was trained using "" and it never really ends up with better CFG performance... but now it does!

bghira commented 2 months ago

See the base SDXL pipeline:

        # get unconditional embeddings for classifier free guidance
        zero_out_negative_prompt = negative_prompt is None and self.config.force_zeros_for_empty_prompt
        if do_classifier_free_guidance and negative_prompt_embeds is None and zero_out_negative_prompt:
            negative_prompt_embeds = torch.zeros_like(prompt_embeds)
            negative_pooled_prompt_embeds = torch.zeros_like(pooled_prompt_embeds)

And the config:

    {
       "_class_name": "StableDiffusionXLPipeline",
       "_diffusers_version": "0.19.0.dev0",
       "force_zeros_for_empty_prompt": true
    }
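
Something along these lines in the training example would mirror the pipeline (a rough sketch with illustrative names, not a drop-in patch):

    import torch

    # Rough sketch of the proposed change (illustrative names, not a drop-in patch):
    # honour the pipeline's force_zeros_for_empty_prompt flag when building the null
    # conditioning used for conditioning dropout, instead of always encoding "".
    def build_null_conditioning(force_zeros_for_empty_prompt, prompt_embeds, pooled_prompt_embeds, encode_empty_prompt):
        if force_zeros_for_empty_prompt:
            # match StableDiffusionXLPipeline: all-zeros unconditional embeddings
            return torch.zeros_like(prompt_embeds), torch.zeros_like(pooled_prompt_embeds)
        # otherwise keep the current behaviour of encoding an empty string
        return encode_empty_prompt("")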

sayakpaul commented 2 months ago

Ah, makes a ton of sense. Do you want to take a stab at opening a PR to fix this?

bghira commented 2 months ago

Can I also open a pull request for all of the other training examples, to add general dropout capabilities to them?

sayakpaul commented 2 months ago

We can open that up for the community. This way everyone gets to participate.

bghira commented 2 months ago

Like the ticket for updating the fp16 error? #6231

sayakpaul commented 2 months ago

https://github.com/huggingface/diffusers/issues/6552