bigscience-workshop / petals

🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
https://petals.dev
MIT License

Is there any plan to support MoE models like Mixtral8×7B? #548

Closed arronKler closed 7 months ago

arronKler commented 11 months ago

As the title says, is there any plan (or the ability) to support MoE models like Mixtral 8x7B?

mryab commented 11 months ago

Hi! We definitely have the ability to support Mixtral and other MoE models (Hivemind, the library for decentralized DL used by Petals, was originally designed for mixtures-of-experts), but the team currently doesn't have enough bandwidth to implement them in Petals right away. I might have some time over the holidays to work on it, but if you (or someone else from the community) are willing to contribute this, it will probably be much faster.

fakerybakery commented 11 months ago

+1

gaborkukucska commented 10 months ago

+1

frburrue commented 10 months ago

Hello,

I'm trying to implement Mixtral 8x7B following this guide: https://github.com/bigscience-workshop/petals/wiki/Run-a-custom-model-with-Petals

I have some questions about how to implement the block.py and model.py files. Could you give me some guidance?

I would be very interested in contributing to the project.

Thank you.
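For context, the guide above has each supported model ship a small adapter: a block.py that wraps a single transformer decoder layer so a Petals server can run it, and a model.py that defines the distributed model classes built on top of those blocks. The snippet below is only a minimal sketch of the block.py side, modeled on how the existing adapters are laid out; the class name WrappedMixtralBlock and the pass-through forward() are assumptions, not the code that was eventually merged.

```python
# Illustrative sketch of a block.py-style wrapper for one Mixtral decoder layer.
# The class name and the simple pass-through forward() are assumptions based on
# how other Petals model adapters are structured, not the merged implementation.
import torch
from transformers.models.mixtral.modeling_mixtral import MixtralDecoderLayer


class WrappedMixtralBlock(MixtralDecoderLayer):
    """One Mixtral transformer block (self-attention + MoE feed-forward),
    exposed with the forward() interface a Petals server calls."""

    def forward(self, hidden_states: torch.Tensor, *args, **kwargs):
        # A real adapter would also convert Petals' tensor-based attention cache
        # to/from the format the Hugging Face layer expects; here we simply
        # delegate to the underlying MixtralDecoderLayer.
        return super().forward(hidden_states, *args, **kwargs)
```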

artek0chumak commented 8 months ago

Hello!

We added and merged support for Mixtral models: https://github.com/bigscience-workshop/petals/pull/553.

Just update your servers to the new version of Petals.
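For anyone finding this issue later: on the server side, updating means upgrading the petals package (e.g. via pip) and restarting python -m petals.cli.run_server with a Mixtral model name. On the client side, the standard Petals API should work as usual; below is a minimal sketch, assuming the swarm serves a Mixtral checkpoint. The repository name mistralai/Mixtral-8x7B-Instruct-v0.1 is an example, not a statement of what the public swarm currently hosts.

```python
# Minimal client-side sketch, assuming a Mixtral checkpoint is being served.
# The repo name below is an example; substitute whichever Mixtral model your
# swarm (public or private) actually serves.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "mistralai/Mixtral-8x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Standard generate() call; the heavy transformer blocks run on remote servers.
inputs = tokenizer("Mixture-of-Experts models are", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```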