huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Implement LlamaGen for Image Generation #33905

Open ighoshsubho opened 1 month ago

ighoshsubho commented 1 month ago

Feature request

Add support for LlamaGen, an autoregressive image generation model, to the Transformers library. LlamaGen applies the next-token prediction paradigm of large language models to visual generation.

Paper: https://arxiv.org/abs/2406.06525
Code: https://github.com/FoundationVision/LlamaGen

Key components to implement:

  1. Image tokenizer
  2. Autoregressive image generation model (based on Llama architecture)
  3. Class-conditional and text-conditional image generation
  4. Classifier-free guidance for sampling
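
On point 4, the sampling step the paper describes mixes conditional and unconditional logits at each generation step. A minimal sketch of that formula (function and variable names are illustrative, not taken from the reference code):

```python
import torch

def cfg_logits(cond_logits: torch.Tensor, uncond_logits: torch.Tensor,
               guidance_scale: float) -> torch.Tensor:
    # Classifier-free guidance: push the conditional logits away from the
    # unconditional ones; guidance_scale == 1.0 recovers plain sampling.
    return uncond_logits + guidance_scale * (cond_logits - uncond_logits)
```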

Motivation

LlamaGen demonstrates that vanilla autoregressive models without vision-specific inductive biases can achieve state-of-the-art image generation performance. Implementing it in Transformers would enable easier experimentation and integration with existing language models.

Your contribution

I can help by contributing this model and providing examples and detailed explanations of the model architecture and training process if needed.

SOGeKING-NUL commented 1 month ago

This looks like an incredible feature, Shubho! Please allow me to work with you on this as my open-source contribution for Hacktoberfest.

LysandreJik commented 1 month ago

Thanks for the request! cc @qubvel, @molbap, what do you think?

qubvel commented 1 month ago

Very interesting! As far as I know, we don't have image-generation models in transformers yet, or am I missing something? So I'm wondering which is the better place for such a model: transformers or diffusers (it's not a diffusion model, though). cc @sayakpaul maybe

zucchini-nlp commented 1 month ago

Hey! Just saw this issue. I've been working on/reviewing some VLM models that can generate images or text from image+text input. TBH we only have ImageGPT, a very old architecture for image generation that is very similar to LlamaGen, IIUC. Two more PRs are open for VLMs with image generation: Chameleon's VQ-VAE decoder support, which went stale because the contributor got busy, and Emu3, which I can hopefully work on in the next few weeks.

I like LlamaGen and I think it can be a nice addition. From what I see, the model doesn't take an image as input, so no inpainting or other tasks, only generation from text. It shouldn't be hard to fit into the general model API. Do we need any controlled, structured generation, e.g. limiting the generated tokens to a specific subset and length? It would be super nice if that kind of control could be done with existing LogitsProcessors; adding new processors is going to add more maintenance burden for us.
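
For what it's worth, restricting generation to the image-code range and to a fixed length already looks expressible with existing machinery. A rough sketch, assuming a hypothetical vocabulary layout with text ids first and image codes after:

```python
from transformers import LogitsProcessorList, SuppressTokensLogitsProcessor

# Hypothetical layout: ids [0, num_text_tokens) are text, the rest image codes.
num_text_tokens = 32_000

# Suppress every text token so generate() can only emit image codes.
processors = LogitsProcessorList(
    [SuppressTokensLogitsProcessor(list(range(num_text_tokens)))]
)

# A fixed-size image (e.g. a 24x24 grid = 576 codes) can be enforced with
# min_new_tokens == max_new_tokens, with no new processor needed:
# out = model.generate(input_ids, logits_processor=processors,
#                      min_new_tokens=576, max_new_tokens=576)
```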

sayakpaul commented 1 month ago

Very interesting discussion here.

My personal opinion is that if the image generation process is autoregressive in nature (for which transformers already has nice foundations), it makes sense to keep such models inside transformers.

diffusers houses models that follow some kind of denoising in the overall generation workflow. The only pipeline in diffusers that is not based on diffusion or rectified flow is aMUSEd (an open reproduction of MUSE). However, it still has an iterative denoising schedule (in the form of masking). Broadly speaking, our generation workflow is abstracted through a DiffusionPipeline, which stitches together the different model-level components.
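
For illustration, a minimal sketch of that abstraction, assuming the `amused/amused-256` checkpoint on the Hub:

```python
from diffusers import DiffusionPipeline

# DiffusionPipeline resolves the concrete pipeline class (AmusedPipeline here)
# and wires up its model-level components: text encoder, tokenizer,
# transformer, VQ-VAE, and the masking scheduler.
pipe = DiffusionPipeline.from_pretrained("amused/amused-256")
image = pipe("a photo of a dog reading a newspaper").images[0]
```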

But I will also let @yiyixuxu chime in here.

GargDivanshu commented 1 month ago

Interesting problem here. I would like to collaborate with @ighoshsubho on this!

qubvel commented 1 month ago

Thanks, everyone, for the discussion! It seems like we've agreed that transformers will be a good place to implement this model. @zucchini-nlp, thanks for sharing the reference models, could you please also link any merged or ongoing PRs? I believe that would be super helpful for understanding patterns for implementation!

zucchini-nlp commented 1 month ago

These two PRs might help, but they have a lot of extra logic specific to interleaving image and text. I would say the closest one is ImageGPT, so LlamaGen can be implemented in a similar way :)
https://github.com/huggingface/transformers/pull/32013
https://github.com/huggingface/transformers/pull/33770
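
For anyone picking this up, the ImageGPT generation pattern looks roughly like this (adapted from the ImageGPT docs); a LlamaGen implementation could follow the same shape, except the generated codes would be decoded back to pixels by the VQ image tokenizer rather than a cluster lookup:

```python
import torch
from transformers import ImageGPTImageProcessor, ImageGPTForCausalImageModeling

processor = ImageGPTImageProcessor.from_pretrained("openai/imagegpt-small")
model = ImageGPTForCausalImageModeling.from_pretrained("openai/imagegpt-small")

# Unconditional generation: start from the special SOS token and sample
# a full sequence of discrete pixel clusters.
context = torch.full((1, 1), model.config.vocab_size - 1)  # SOS token
output = model.generate(
    input_ids=context,
    max_length=model.config.n_positions + 1,
    temperature=1.0,
    do_sample=True,
    top_k=40,
)
# processor.clusters maps the generated ids back to RGB pixel values.
```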

GargDivanshu commented 1 month ago

@qubvel @ighoshsubho have you guys started off with the implementation for this model yet? If yes, and if you are okay with it, I would like to help you out.

ighoshsubho commented 1 month ago

> @qubvel @ighoshsubho have you guys started off with the implementation for this model yet? If yes, and if you are okay with it, I would like to help you out.

Not yet, I was busy with something else. I will start implementing this soon.

deepwilson commented 1 month ago

@ighoshsubho I would like to contribute as well. Please suggest how I can help.

leloykun commented 3 weeks ago

Hi all!

I have a tracker issue here for all the image-text in-and-out models: https://github.com/huggingface/transformers/issues/32926

If I missed anything, please leave a comment!

I've also started work for Chameleon & Anole here: https://github.com/huggingface/transformers/pull/32013

I've just rebased it onto main, and the remaining errors seem to be unrelated to the PR (e.g. Flax T5 failing even though I never touched it). I think the PR would be a good starting point for this and related models.

Please help me out!