ighoshsubho opened this issue 1 month ago
This looks like an incredible feature, Shubo! Please allow me to work with you on this as my open-source contribution for Hacktoberfest.
Thanks for the request! cc @qubvel, @molbap, what do you think?
Very interesting! As far as I know, we don't have image-generation models in transformers yet, or am I missing something? So I'm wondering where the better place for such a model is: in transformers or in diffusers (it's not a diffusion model though).
cc @sayakpaul maybe
Hey! Just saw this issue. I've been working on/reviewing some VLM models that can generate images or text from image+text input. TBH we have only ImageGPT as a very old architecture for image generation, which is very similar to LlamaGen iiuc. Two more PRs are open for VLMs with image generation: Chameleon's decoder VQ-VAE support, which went stale because the contributor got busy, and Emu3, which I hope to work on in the coming weeks.
I like LlamaGen and I think it can be a nice addition. From what I see, the model doesn't take an image as input, so no inpainting or other tasks, only generation from text. It shouldn't be hard to fit into the general model API. Do we need any controlled/structured generation, e.g. limiting the generated tokens to a specific subset and a fixed length? It would be super nice if that kind of control could be done with existing LogitsProcessors; adding new processors would add more maintenance burden for us.
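For reference, the kind of constraint mentioned above can already be expressed with the existing processors. Here is a minimal sketch, using gpt2 purely as a stand-in model and a hypothetical split of the vocabulary into "text" and "image" id ranges (a real LlamaGen checkpoint would expose a VQ codebook instead):

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessorList,
    SuppressTokensLogitsProcessor,
)

# gpt2 is only a stand-in; the id split below is entirely hypothetical.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Pretend ids [0, 1000) are "text" tokens: suppress them so only the
# remaining ("image") ids can be sampled.
processors = LogitsProcessorList(
    [SuppressTokensLogitsProcessor(list(range(1000)))]
)

inputs = tok("a photo of", return_tensors="pt")
out = model.generate(
    **inputs,
    logits_processor=processors,
    do_sample=True,
    min_new_tokens=16,  # force a fixed-length block, e.g. a token grid
    max_new_tokens=16,
)
```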
Very interesting discussion here.
My personal opinion is that if the image generation process is more auto-regressive in nature (for which transformers already has nice foundations), it makes sense to keep it inside of transformers. diffusers houses models that follow some kind of denoising in the overall generation workflow. The only pipeline in diffusers that is not based on diffusion or rectified flow is aMUSEd (an open reproduction of MUSE), and even it still has an iterative denoising schedule (in the form of masking). Broadly speaking, our generation workflow is abstracted through a DiffusionPipeline, which stitches together the different model-level components:
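A rough sketch of what that looks like (the checkpoint name is just an example; the exact set of components varies by pipeline):

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# The pipeline itself is mostly a thin container around these parts:
print(pipe.components.keys())
# e.g. dict_keys(['vae', 'text_encoder', 'tokenizer', 'unet', 'scheduler', ...])
```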
But I will also let @yiyixuxu chime in here.
Interesting problem here. I would like to collaborate with @ighoshsubho on this!
Thanks, everyone, for the discussion! It seems like we've agreed that transformers will be a good place to implement this model. @zucchini-nlp, thanks for sharing the reference models; could you please also link any merged or ongoing PRs? I believe that would be super helpful for understanding patterns for the implementation!
These two PRs might help, but they have a lot of extra logic specific to interleaving image and text. I would say the closest one is ImageGPT, so LlamaGen can be implemented in a similar way :) https://github.com/huggingface/transformers/pull/32013 https://github.com/huggingface/transformers/pull/33770
@qubvel @ighoshsubho have you guys started off with the implementation for this model yet? If yes, and if you are okay with it, I would like to help you out.
Not yet, I was busy with something else; will start implementing this soon.
@ighoshsubho I would like to contribute as well. Please suggest how I can help.
Hi all!
I have a tracker issue here for all the image-text in-and-out models: https://github.com/huggingface/transformers/issues/32926
If I missed anything, please leave a comment!
I've also started work for Chameleon & Anole here: https://github.com/huggingface/transformers/pull/32013
I've just rebased it onto main, and the remaining errors seem to be unrelated to the PR (e.g. Flax T5 failing even though I never touched it). I think that PR would be a good starting point for this and related models.
Please help me out!
Feature request
Add support for LlamaGen, an autoregressive image generation model, to the Transformers library. LlamaGen applies the next-token prediction paradigm of large language models to visual generation.
Paper: https://arxiv.org/abs/2406.06525
Code: https://github.com/FoundationVision/LlamaGen
Key components to implement:
- An image tokenizer (VQGAN-style quantizer/decoder) that maps images to discrete codebook ids and back
- A Llama-architecture autoregressive transformer doing next-token prediction over image tokens (class- and text-conditional variants)
- Sampling with classifier-free guidance
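A toy, self-contained sketch of how these pieces fit together: an image is a sequence of discrete codebook ids generated left-to-right like text, then decoded back to pixels by the VQ decoder. The "model" below is a uniform stand-in, not a trained network; the codebook size and downsample ratio are taken from the paper.

```python
import torch

codebook_size = 16384   # codebook size of LlamaGen's image tokenizer (per the paper)
grid = 16               # 256x256 pixels at downsample ratio 16 -> 16x16 token grid
seq_len = grid * grid

def next_token_logits(prefix: list[int]) -> torch.Tensor:
    # Stand-in for the Llama-style AR transformer conditioned on `prefix`.
    return torch.zeros(codebook_size)

tokens: list[int] = []
for _ in range(seq_len):
    probs = torch.softmax(next_token_logits(tokens), dim=-1)
    tokens.append(int(torch.multinomial(probs, 1)))

# A real implementation would reshape `tokens` to (grid, grid) and feed them
# to the VQ-VAE decoder to obtain the final image.
print(f"sampled {len(tokens)} image tokens")
```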
Motivation
LlamaGen demonstrates that vanilla autoregressive models without vision-specific inductive biases can achieve state-of-the-art image generation performance. Implementing it in Transformers would enable easier experimentation and integration with existing language models.
Your contribution
I can help by contributing this model, and I can provide examples and detailed explanations of the model architecture and training process if needed.