huggingface / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
https://huggingface.co/docs/diffusers
Apache License 2.0

VQ-Diffusion #319

Closed: patrickvonplaten closed this 1 year ago

patrickvonplaten commented 2 years ago

Model/Pipeline/Scheduler description

VQ-Diffusion is based on a VQ-VAE whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). It produces significantly better text-to-image generation results when compared with Autoregressive models with similar numbers of parameters. Compared with previous GAN-based methods, VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin.

https://github.com/microsoft/VQ-Diffusion
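
For a sense of what the end result might look like, here is a rough sketch of how a ported pipeline could eventually be used, assuming it follows the usual diffusers from_pretrained pattern (the class name and checkpoint id below are assumptions, not a final API):

    # Illustrative only: the pipeline class name and checkpoint id are assumptions.
    import torch
    from diffusers import VQDiffusionPipeline  # hypothetical until the port lands

    device = "cuda" if torch.cuda.is_available() else "cpu"
    pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq").to(device)

    image = pipe("a teddy bear playing in the pool").images[0]
    image.save("teddy_bear.png")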

Open source status

Provide useful links for the implementation

VQ-Diffusion would be a super cool addition to diffusers. cc @cientgu and @zzctan .

Also cc @patil-suraj here

unography commented 2 years ago

Hi @patrickvonplaten, would love to take this up!

patrickvonplaten commented 2 years ago

This would be great! Let me know if you need any help :-) To begin with, I think we should try to get it running with the original codebase and then port the code to diffusers.

patil-suraj commented 2 years ago

Hey @unography awesome! Happy to help here if you have any questions.

patrickvonplaten commented 2 years ago

Any progress here, @unography? Do you already have an open PR? :-) Otherwise, let's maybe open it up again to the community.

345ishaan commented 2 years ago

Hi, I will be happy to contribute / collaborate on this :)

unography commented 2 years ago

Hi @patrickvonplaten, unfortunately I've been unable to spend time on this right now due to some other commitments. We can open this up again to the community.

patrickvonplaten commented 1 year ago

No worries! @345ishaan would you be interested in giving it a go?

345ishaan commented 1 year ago

@patrickvonplaten Yes, happy to start with this. Do you have any documentation / suggestions / reference CLs on how to quickstart?

345ishaan commented 1 year ago

Update: I've been getting familiar with the paper and the authors' code. I also checked how other models are integrated into diffusers pipelines in inference-only mode, so the plan is to do the same for VQ-Diffusion as the next step, using the original implementation.
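
For reference, a minimal sketch of that inference-only pipeline pattern (component names here are placeholders, not the final VQ-Diffusion API):

    # Sketch of the usual inference-only pipeline pattern in diffusers.
    # Component names (vqvae, transformer, scheduler) are placeholders only.
    import torch
    from diffusers import DiffusionPipeline

    class VQDiffusionLikePipeline(DiffusionPipeline):
        def __init__(self, vqvae, transformer, scheduler):
            super().__init__()
            # register_modules makes the components loadable via from_pretrained
            self.register_modules(vqvae=vqvae, transformer=transformer, scheduler=scheduler)

        @torch.no_grad()
        def __call__(self, prompt, num_inference_steps=100):
            # 1. encode the text prompt
            # 2. iteratively denoise the discrete latents with the scheduler
            # 3. decode the final latents to an image with the VQ-VAE decoder
            raise NotImplementedError("illustrative skeleton only")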

pcuenca commented 1 year ago

That's awesome, @345ishaan! Let us know if you need any help :)

williamberman commented 1 year ago

Hello, super sorry, I wasn't aware someone was already working on this! I ported the VQVAE for the ITHQ dataset. Would love to help contribute if possible :)

I put up a draft PR https://github.com/huggingface/diffusers/pull/658 for the VQVAE, with docs on how to compare it against VQ-Diffusion. Is the standard to wait until the whole pipeline is complete before merging anything, or is it OK to merge functionality incrementally? For example, for VQ-Diffusion it might be easier to get the individual autoencoders working one at a time in their own commits before moving on to the rest of the model.

Any advice is appreciated, thanks!

345ishaan commented 1 year ago

Hmm ok, if you have crossed the finish line, then go ahead! I was mostly working on adding the implementation to diffusers in inference mode. If you need any further help, happy to collaborate.

Going forward, what is the best way to avoid such overlaps? I thought it was via proposing/updating through issues.

williamberman commented 1 year ago

@345ishaan definitely not over the finish line, just ported the autoencoder for one of the models! Happy to collaborate :)

345ishaan commented 1 year ago

Sounds good! I will check your PR. Do you want to chat over Discord?

williamberman commented 1 year ago

@cientgu @zzctan

Could I have some help parsing q_posterior?

https://github.com/microsoft/VQ-Diffusion/blob/3c98e77f721db7c787b76304fa2c96a36c7b00af/image_synthesis/modeling/transformers/diffusion_transformer.py#L235-L267

I believe it's computing equation 11 in log space, but I still have a few questions. I understand it's adapted from https://github.com/ehoogeboom/multinomial_diffusion/blob/9d907a60536ad793efd6d2a6067b3c3d6ba9fce7/diffusion_utils/diffusion_multinomial.py#L171-L193 which provides the initial derivation that makes sense.

        # q(xt-1 | xt, x0) = q(xt | xt-1, x0) * q(xt-1 | x0) / q(xt | x0)
        # where q(xt | xt-1, x0) = q(xt | xt-1).

However, the later comment is a bit vague :)

        # Note: _NOT_ x_tmin1, which is how the formula is typically used!!!
        # Not very easy to see why this is true. But it is :)
        unnormed_logprobs = log_EV_qxtmin_x0 + self.q_pred_one_timestep(log_x_t, t)

Because it seems like the actual equation it's using is q(xt+1 | xt) * q(xt-1 | x0) / q(xt | x0).
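
For concreteness, this is roughly how I read the log-space version of that posterior (a sketch with made-up variable names, not the repo's code):

    # Rough sketch, not the repo's code: in log space the product
    # q(xt | xt-1) * q(xt-1 | x0) becomes a sum of log-probabilities, and
    # dividing by q(xt | x0) amounts to normalizing over the codebook classes.
    import torch

    def log_posterior(log_q_xtmin_given_x0, log_q_xt_given_xtmin):
        # unnormalized log q(xt-1 | xt, x0), per class
        unnormed = log_q_xtmin_given_x0 + log_q_xt_given_xtmin
        # normalize over the class dimension (assumed to be dim=1 here)
        return unnormed - torch.logsumexp(unnormed, dim=1, keepdim=True)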

Additional questions:

  1. Some context on how you're handling masks in q_posterior would be helpful
  2. What is the summation over in equation 11 and how does it map to q_posterior?
  3. I don't see an analog for the lines starting from 262 onward in multinomial diffusion; could you provide some additional context there as well?

Lmk if any of that wasn't clear, thank you!

345ishaan commented 1 year ago

@williamberman I will be able to take some tasks today and tomorrow. I just checked your PR; it seems like you ported the VQ-VAE encoder there. Do you want to chat over Discord to split tasks? My username is 345ishaan#9676

williamberman commented 1 year ago

Pinged you on Discord, @345ishaan!

Zeqiang-Lai commented 1 year ago

(Quoting @williamberman's question above about parsing q_posterior.)

Have you figured out the questions here? I am also confused that the actual computation seems to be q(xt+1 | xt) * q(xt-1 | x0) / q(xt | x0).

williamberman commented 1 year ago

Hey @Zeqiang-Lai I did actually figure out what was going on here!

This class is heavily commented: https://github.com/huggingface/diffusers/blob/e0a2bd15f9a1eb0d48a69973a9c7ddb4eabb1a27/src/diffusers/schedulers/scheduling_vq_diffusion.py#L299

I reverse engineered it through trial and error and a lot of whiteboard markers!

I don't remember all of it exactly, but the main components are: doing the calculation in log space for numerical stability, and avoiding a log-space matmul for which there's no memory-efficient PyTorch kernel. A few of the other components are just cheeky linear algebra.
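
To illustrate the matmul point: naively translating a probability-space matmul Q @ p into log space with a logsumexp has to materialize the full broadcasted intermediate, which is what blows up memory (tiny made-up shapes below; real codebooks have thousands of entries):

    # Illustrative only: the naive log-space equivalent of Q @ p broadcasts to a
    # (K, K, N) intermediate before reducing, which is the memory problem.
    import torch

    K, N = 32, 8  # tiny shapes for illustration; real codebook sizes are in the thousands
    log_Q = torch.randn(K, K).log_softmax(dim=1)  # log transition matrix (made-up values)
    log_p = torch.randn(K, N).log_softmax(dim=0)  # per-position log probabilities

    # out[i, n] = log sum_j exp(log_Q[i, j] + log_p[j, n])
    log_out = torch.logsumexp(log_Q.unsqueeze(-1) + log_p.unsqueeze(0), dim=1)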

I later discovered that there's an explanation in the appendix of the multinomial diffusion paper. I didn't read it exhaustively, but from skimming it looks like it covers similar material. https://arxiv.org/pdf/2102.05379.pdf

[Screenshot of the relevant derivation from the appendix of the multinomial diffusion paper]

williamberman commented 1 year ago

@Zeqiang-Lai if you have any other questions on the math, feel free to shoot me an email wlbberman@gmail.com