Open · dongxiaolong opened 1 year ago
Hi @dongxiaolong, Fuyu is interesting in that we can simply feed chunks of arbitrarily sized images into the model. I also have multimodal support using Llava on the roadmap, but if we can better understand how Fuyu works, it might be feasible to architect multimodal training for both methodologies. Is this something you'd be interested in helping to do?
Hi @winglian,
I noticed the Fuyu Finetuning Example. It appears that the fine-tuning details for Fuyu are currently being developed under the transformers library; you can find more information in the Fuyu directory of the transformers repository.
As of now, I'm not aware of any additional details regarding the model's training that have been shared publicly. It might be beneficial to reach out to the original authors for further insights.
If there's any way I can assist, please let me know. I have access to two A100 GPUs that could be utilized for this purpose.
I am interested in assisting with this.
With the addition of Fuyu to transformers, axolotl should inherently support it. `sample_packing` and `flash_attention` would not work, as Fuyu's modeling code does not support them.
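For reference, a minimal inference sketch using the Fuyu classes that transformers ships, which is the same loading path axolotl would rely on. The image URL and prompt are placeholders, and the dtype/device choices are assumptions, not requirements:

```python
import requests
import torch
from PIL import Image
from transformers import FuyuForCausalLM, FuyuProcessor

model_id = "adept/fuyu-8b"
processor = FuyuProcessor.from_pretrained(model_id)
# bfloat16 + device_map are illustrative choices for a single large GPU.
model = FuyuForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder image URL -- substitute any local or remote image.
image = Image.open(requests.get("https://example.com/chart.png", stream=True).raw)
prompt = "Generate a coco-style caption.\n"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
# Strip the prompt tokens and decode only the newly generated text.
print(processor.batch_decode(
    outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```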
⚠️ Please check that this feature request hasn't been suggested before.
🔖 Feature description
Fuyu-8B is a multimodal text and image transformer trained by Adept AI.
Architecturally, Fuyu is a vanilla decoder-only transformer: there is no image encoder. Image patches are instead linearly projected into the first layer of the transformer, bypassing the embedding lookup. We simply treat the transformer decoder like an image transformer (albeit with no pooling and causal attention).
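To make the no-image-encoder point concrete, here is an illustrative sketch of the patch-projection idea, not Fuyu's actual implementation. The module names are hypothetical; the dimensions are taken from the published Fuyu-8B config:

```python
import torch
import torch.nn as nn

# Dimensions per the Fuyu-8B config: 4096 hidden size, 30x30 RGB patches.
hidden_size, patch_size, vocab_size = 4096, 30, 262144

token_embed = nn.Embedding(vocab_size, hidden_size)
# Each flattened RGB patch (30*30*3 values) maps straight to the hidden size.
patch_proj = nn.Linear(patch_size * patch_size * 3, hidden_size)

def build_inputs(input_ids: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
    """input_ids: (seq,); image: (3, H, W) with H, W divisible by patch_size."""
    c, h, w = image.shape
    patches = (
        image.unfold(1, patch_size, patch_size)   # split rows into patches
             .unfold(2, patch_size, patch_size)   # split cols into patches
             .permute(1, 2, 0, 3, 4)              # (H/p, W/p, 3, p, p)
             .reshape(-1, c * patch_size * patch_size)
    )
    image_embeds = patch_proj(patches)            # (num_patches, hidden)
    text_embeds = token_embed(input_ids)          # (seq, hidden)
    # Image embeddings simply precede the text tokens in the decoder's
    # input sequence; no pooling, no separate vision tower.
    return torch.cat([image_embeds, text_embeds], dim=0).unsqueeze(0)
```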
✔️ Solution
Multimodal training support.
❌ Alternatives
No response
📝 Additional Context
No response
Acknowledgements