facebookresearch / multimodal

TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.
BSD 3-Clause "New" or "Revised" License

Add support for LLaVA model #482

Open youssefadr opened 12 months ago

youssefadr commented 12 months ago

🚀 The feature, motivation and pitch

LLaVA currently seems to be a strong open-source competitor to GPT-4V, but it doesn't appear to be supported by the library. Do you plan on adding it? If so, is there something I could contribute to help?

Alternatives

No response

Additional context

No response

ebsmothers commented 12 months ago

Hi @youssefadr, thanks for opening this issue. LLaVA is definitely something we're interested in adding and we would be happy to have you contribute. Is there a specific portion of the model you're especially interested in helping out with?

youssefadr commented 12 months ago

Thanks for your answer @ebsmothers, I would like to add the model to torchmultimodal/models first.

ebsmothers commented 12 months ago

That sounds reasonable to me. We already have CLIP visual encoders in the library here, so feel free to reuse those. Then the bulk of the work for the model should be to add the LLM. A couple of pointers to help with that: TransformerDecoderLayer, RMSNorm. We also have an open PR for rotary positional embeddings (#450) that might be useful. Let me know if this makes sense; happy to provide more details as needed.
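
For concreteness, here is a rough sketch of how those pieces might compose into a LLaVA-style model: a vision encoder produces patch features, a projection maps them into the LLM embedding space, and a decoder-only LLM runs over the concatenated image and text tokens. The module names, dimensions, and the `FakeVision` stand-in below are illustrative assumptions, not the library's actual API; in a real contribution the vision tower would be the existing CLIP encoder and the decoder a stack of TransformerDecoderLayers with RMSNorm.

```python
import torch
from torch import nn


class LLaVASketch(nn.Module):
    """Hypothetical LLaVA-style composition: vision encoder -> projection ->
    decoder-only LLM over [image tokens; text tokens]. vision_encoder and
    decoder are placeholders for the library's CLIP ViT encoder and a stack
    of TransformerDecoderLayers, respectively."""

    def __init__(self, vision_encoder, decoder, vision_dim, hidden_dim, vocab_size):
        super().__init__()
        self.vision_encoder = vision_encoder               # e.g. CLIP ViT, typically frozen
        self.projection = nn.Linear(vision_dim, hidden_dim)  # image features -> LLM embedding space
        self.token_embedding = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = decoder                             # decoder-only LLM trunk
        self.output = nn.Linear(hidden_dim, vocab_size, bias=False)

    def forward(self, images, input_ids):
        # (batch, num_patches, vision_dim) patch features from the vision tower
        image_feats = self.vision_encoder(images)
        image_embeds = self.projection(image_feats)        # (batch, num_patches, hidden_dim)
        text_embeds = self.token_embedding(input_ids)      # (batch, seq_len, hidden_dim)
        # Prepend projected image tokens to the text tokens, then run the LLM over both
        x = torch.cat([image_embeds, text_embeds], dim=1)
        x = self.decoder(x)
        return self.output(x)                              # (batch, num_patches + seq_len, vocab_size)


if __name__ == "__main__":
    # Toy smoke test with stand-in modules, just to check shapes.
    class FakeVision(nn.Module):
        def forward(self, images):
            return torch.randn(images.shape[0], 256, 1024)  # (batch, patches, vision_dim)

    model = LLaVASketch(FakeVision(), nn.Identity(),
                        vision_dim=1024, hidden_dim=512, vocab_size=100)
    logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 100, (2, 8)))
    print(logits.shape)  # torch.Size([2, 264, 100])
```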

youssefadr commented 12 months ago

Nice! I'll come back to you with more questions later; I'm not sure I'll start working on it this week.

theadamsabra commented 7 months ago

@youssefadr have you worked on this in any capacity? I'm interested in picking this up if not.

ebsmothers commented 7 months ago

@theadamsabra if not, you are more than welcome to take it up

theadamsabra commented 7 months ago

@ebsmothers thanks! If I don't get a response by tomorrow, I'll just pick it up myself.