kyegomez / LIMoE

Implementation of "the first large-scale multimodal mixture-of-experts models" from the paper "Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts".

How to train #3

Closed littlexinyi closed 3 months ago

littlexinyi commented 6 months ago

Nice work! Could you please explain how to use the concatenated features (after several MoE Transformer blocks) to train the model with multimodal contrastive learning? How is the contrastive loss applied to achieve modality alignment and retrieval? And where are the auxiliary losses?
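For context, here is a minimal sketch of what such a training step could look like: a CLIP-style symmetric contrastive loss on pooled per-modality features, plus a generic MoE load-balancing auxiliary loss. Everything below is an assumption, not the repo's actual API; the `model` forward signature, feature names, and loss weight are hypothetical, and note that the LIMoE paper itself proposes entropy-based local/global auxiliary losses rather than the Switch-style balancing loss shown here.

```python
# Hypothetical training-step sketch (NOT the repo's actual API).
# Assumes the model returns pooled image/text features and router statistics.
import torch
import torch.nn.functional as F


def contrastive_loss(img_feats, txt_feats, temperature=0.07):
    """Symmetric InfoNCE loss between L2-normalized image and text features."""
    img_feats = F.normalize(img_feats, dim=-1)
    txt_feats = F.normalize(txt_feats, dim=-1)
    logits = img_feats @ txt_feats.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)                # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)            # text -> image direction
    return 0.5 * (loss_i2t + loss_t2i)


def load_balancing_loss(router_probs, expert_indices, num_experts):
    """Switch-style auxiliary loss encouraging uniform expert usage.

    router_probs:   (num_tokens, num_experts) softmax router outputs
    expert_indices: (num_tokens,) argmax expert assignment per token
    """
    # Fraction of tokens dispatched to each expert
    dispatch_frac = F.one_hot(expert_indices, num_experts).float().mean(dim=0)
    # Mean router probability assigned to each expert
    prob_frac = router_probs.mean(dim=0)
    return num_experts * torch.sum(dispatch_frac * prob_frac)


def training_step(model, images, texts, aux_weight=0.01):
    # Assumed interface: adapt to the real forward signature of the repo's model.
    img_feats, txt_feats, router_probs, expert_indices = model(images, texts)
    loss_con = contrastive_loss(img_feats, txt_feats)
    loss_aux = load_balancing_loss(router_probs, expert_indices,
                                   num_experts=router_probs.size(-1))
    return loss_con + aux_weight * loss_aux
```

With features trained this way, retrieval would just be ranking the cosine similarities between a query's feature and the candidates' features from the other modality.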


github-actions[bot] commented 6 months ago

Hello there, thank you for opening an Issue! 🙏🏻 The team has been notified and will get back to you as soon as possible.

github-actions[bot] commented 4 months ago

Stale issue message

AshleyLuo001 commented 2 months ago

I'm wondering the same thing.