dvmazur / mixtral-offloading

Run Mixtral-8x7B models in Colab or consumer desktops
MIT License

Can this be used for Jamba inference #30

Open freQuensy23-coder opened 3 months ago

freQuensy23-coder commented 3 months ago

Can I use this solution for inference of Jamba (https://huggingface.co/ai21labs/Jamba-v0.1/discussions) with offloading of its Mamba MoE layers?

Jamba is a SOTA open-source long-context model, and supporting it would be very useful for this library.
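
For context, plain layer-wise offloading of Jamba is already possible today through transformers/accelerate with `device_map="auto"` (assuming a transformers version recent enough to include Jamba support). That is not this repo's expert-aware strategy and is typically much slower, since whole layers are swapped regardless of routing, but it serves as a baseline sketch:

```python
# Baseline only: accelerate's generic layer offloading, not mixtral-offloading's
# expert-aware strategy. Requires a transformers release with Jamba support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/Jamba-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # lets accelerate spill layers to CPU/disk as needed
)

inputs = tokenizer("Hello, Jamba!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```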

dvmazur commented 3 months ago

Hey, @freQuensy23-coder! The code in this repo is quite transformer-MoE specific. I'm not too familiar with Mamba-like architectures, but AFAIK @lavawolfiee has plans to adapt Jamba to work with our offloading strategy.
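
For anyone curious what adapting the offloading idea to Jamba's sparse MoE blocks might look like, here is a minimal, hypothetical sketch. It is not this repo's API, the class name is made up for illustration, and it omits the expert LRU cache, speculative prefetching, and quantization that make the real implementation practical:

```python
# Hypothetical sketch: keep each expert's weights on CPU and move them to GPU
# only when the router actually selects that expert. Names are illustrative.
import torch
import torch.nn as nn

class OffloadedExpert(nn.Module):
    """Wraps a single MoE expert and swaps its weights CPU<->GPU on demand."""

    def __init__(self, expert: nn.Module, device: str = "cuda"):
        super().__init__()
        self.expert = expert.to("cpu")  # resident on CPU most of the time
        self.device = device

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self.expert.to(self.device)            # bring weights in on demand
        out = self.expert(x.to(self.device))   # run the expert on GPU
        self.expert.to("cpu")                  # evict to free VRAM for other experts
        return out
```

Since only a few experts are activated per token, the attention/Mamba backbone can stay on GPU while the (much larger) expert weights are streamed in as needed, which is the same intuition behind the Mixtral offloading here.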