bigscience-workshop / petals

🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
https://petals.dev
MIT License

Is there any plan to support MoE models like Mixtral 8x7B? #548

Closed · arronKler closed this 2 months ago

arronKler commented 6 months ago

As the title says, is there any plan (or the ability) to support MoE models like Mixtral 8x7B?

mryab commented 6 months ago

Hi! We definitely have the ability to support Mixtral and other MoE models: Hivemind, the library for decentralized DL used by Petals, was initially designed for mixtures-of-experts. However, the team currently does not have enough bandwidth to implement them in Petals right away. I might have some time over the holidays to work on it, but if you (or someone else from the community) are willing to contribute this, it will probably happen much faster.

fakerybakery commented 6 months ago

+1

gaborkukucska commented 5 months ago

+1

frburrue commented 5 months ago

Hello,

I'm trying to implement Mixtral 8x7B following this guide: https://github.com/bigscience-workshop/petals/wiki/Run-a-custom-model-with-Petals

I have some questions about implementing the block.py and model.py files (a rough sketch of my current understanding is below). Could you give me some guidance?
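
From the guide, my understanding is that block.py should wrap the corresponding transformers decoder layer so Petals can serve it as a single block. Here is a minimal sketch of what I think that looks like; the class name and the pass-through forward() are my assumptions based on the existing model folders, not the final implementation:

```python
# Rough sketch of a possible block.py, following the pattern from the
# "Run a custom model with Petals" wiki guide. Class name and forward()
# signature are assumptions, not the merged implementation.
import torch
from transformers.models.mixtral.modeling_mixtral import MixtralDecoderLayer


class WrappedMixtralBlock(MixtralDecoderLayer):
    """One Mixtral decoder layer served as a single Petals block.

    The MoE routing (gate + 8 experts) is already implemented inside
    MixtralDecoderLayer, so the wrapper mainly adapts the block to what
    the Petals backend expects.
    """

    def forward(self, hidden_states: torch.Tensor, **kwargs):
        # Delegate to the Hugging Face implementation; Petals handles
        # batching, caching and inter-server routing outside the block.
        return super().forward(hidden_states, **kwargs)
```
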

I would be very interested in contributing to the project.

Thank you.

artek0chumak commented 3 months ago

Hello!

Support for Mixtral models has been added and merged: https://github.com/bigscience-workshop/petals/pull/553.

Just update your servers to the new version of Petals.
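
After updating, the usual client flow should work with a Mixtral checkpoint. The model id below is just an example and may not be hosted on the public swarm; check which models are actually being served:

```python
# Sketch of the client-side flow with a Mixtral checkpoint after updating
# Petals; the model id is an example and may not be on the public swarm.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A mixture of experts is", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```

Servers can join the swarm for that model with the usual `python -m petals.cli.run_server <model_name>` command once they run the new version.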