axolotl-ai-cloud / axolotl

Go ahead and axolotl questions
https://axolotl-ai-cloud.github.io/axolotl/
Apache License 2.0

Support Fuyu-8B #777

Open · dongxiaolong opened this issue 1 year ago

dongxiaolong commented 1 year ago

⚠️ Please check that this feature request hasn't been suggested before.

🔖 Feature description

Fuyu-8B is a multi-modal text and image transformer trained by Adept AI.

Architecturally, Fuyu is a vanilla decoder-only transformer with no separate image encoder. Image patches are instead linearly projected into the first layer of the transformer, bypassing the embedding lookup; the transformer decoder is simply treated like an image transformer (albeit with causal attention and no pooling).
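To make that concrete, here is a minimal PyTorch sketch of the patch-projection idea (the dimensions, vocabulary size, and layer names below are illustrative assumptions, not Fuyu's actual internals):

```python
import torch
import torch.nn as nn

# Illustrative sizes, not Fuyu's real configuration.
hidden_size = 4096                        # decoder hidden width
patch_size = 30                           # square image patches
patch_dim = patch_size * patch_size * 3   # flattened RGB patch

# No image encoder: one linear layer projects each flattened patch
# directly into the decoder's hidden space.
patch_proj = nn.Linear(patch_dim, hidden_size)

# A batch of flattened patches: (batch, num_patches, patch_dim).
patches = torch.randn(1, 16, patch_dim)
image_embeds = patch_proj(patches)        # (1, 16, hidden_size)

# Text tokens still go through the usual embedding lookup.
vocab_size = 32_000                       # illustrative
tok_embed = nn.Embedding(vocab_size, hidden_size)
text_ids = torch.randint(0, vocab_size, (1, 8))
text_embeds = tok_embed(text_ids)         # (1, 8, hidden_size)

# Both streams form one causal sequence for the decoder,
# with no pooling and no cross-attention.
decoder_inputs = torch.cat([image_embeds, text_embeds], dim=1)
print(decoder_inputs.shape)               # torch.Size([1, 24, 4096])
```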

βœ”οΈ Solution

Multimodal training support.

❓ Alternatives

No response

πŸ“ Additional Context

No response


winglian commented 1 year ago

Hi @dongxiaolong, Fuyu is interesting in that we can feed chunks of arbitrarily sized images directly into the model. I also have multimodal support via LLaVA on the roadmap, but if we can build a better understanding of how Fuyu works, it might be feasible to architect multimodal training for both approaches. Is this something you'd be interested in helping with?

dongxiaolong commented 1 year ago

> Hi @dongxiaolong, Fuyu is interesting in that we can feed chunks of arbitrarily sized images directly into the model. I also have multimodal support via LLaVA on the roadmap, but if we can build a better understanding of how Fuyu works, it might be feasible to architect multimodal training for both approaches. Is this something you'd be interested in helping with?

Hi @winglian,

I noticed the Fuyu fine-tuning example. It appears that fine-tuning support for Fuyu is currently being developed within the transformers library; you can find more details in the Fuyu directory of the transformers repository.
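For anyone who wants to try it, the transformers integration can be exercised with something like the following (a rough sketch based on the model card; the adept/fuyu-8b checkpoint and its bus.png sample image live on the Hub):

```python
import requests
import torch
from PIL import Image
from transformers import FuyuForCausalLM, FuyuProcessor

# The processor wraps both the tokenizer and the image patcher.
processor = FuyuProcessor.from_pretrained("adept/fuyu-8b")
model = FuyuForCausalLM.from_pretrained(
    "adept/fuyu-8b", torch_dtype=torch.float16, device_map="auto"
)

# Sample image from the model repo; any RGB image works.
url = "https://huggingface.co/adept/fuyu-8b/resolve/main/bus.png"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text="Describe this image:\n", images=image, return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)

# Decode only the newly generated tokens.
new_tokens = outputs[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```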

As of now, I'm not aware of any additional details regarding the model's training that have been shared publicly. It might be beneficial to reach out to the original authors for further insights.

If there's any way I can assist, please let me know. I have access to two A100 GPUs that could be used for this.

Stillerman commented 1 year ago

I'm interested in helping with this.

NanoCode012 commented 8 months ago

With the addition of Fuyu to transformers, axolotl should support it out of the box. However, sample_packing and flash_attention will not work, since Fuyu's modeling code does not support them.
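For what it's worth, transformers refuses at load time to use an attention implementation that a model's code doesn't declare support for, so the flash-attention limitation surfaces immediately. A rough sketch (exact error text varies by transformers version):

```python
from transformers import FuyuForCausalLM

# Fuyu's modeling code does not declare Flash Attention 2 support,
# so requesting it should fail at load time rather than silently degrade.
try:
    model = FuyuForCausalLM.from_pretrained(
        "adept/fuyu-8b",
        attn_implementation="flash_attention_2",
    )
except ValueError as err:
    print(f"Flash attention unavailable for Fuyu: {err}")
```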