bigscience-workshop / petals

🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
https://petals.dev
MIT License

Add mistral to chat.petals #527

Closed · AmgadHasan closed this issue 1 year ago

AmgadHasan commented 1 year ago

Can you please add one of the fine-tuned Mistral models to the chat service?

It's a very capable model and a good option for those who want fast responses.

borzunov commented 1 year ago

Hi @AmgadHasan,

Mistral-7B is a relatively small model that can run on most consumer GPUs if you load it in 4-bit or 8-bit precision, so we don't think it makes sense to add it to Petals (it would run slower over the network than it does locally).
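For reference, here is a minimal sketch of loading a Mistral-7B model in 4-bit on a single consumer GPU, along the lines suggested above. It assumes the Hugging Face `transformers`, `accelerate`, and `bitsandbytes` packages are installed; the model ID `mistralai/Mistral-7B-Instruct-v0.2` is just one example of a fine-tuned Mistral checkpoint, not the only option.

```python
# Minimal sketch: run a fine-tuned Mistral-7B locally in 4-bit.
# Assumes transformers, accelerate, and bitsandbytes are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example checkpoint

# Quantize weights to 4-bit at load time so the model fits on a consumer GPU.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers on the available GPU(s) automatically
)

inputs = tokenizer("Why is the sky blue?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In 4-bit, the weights of a 7B-parameter model take roughly 4 GB of VRAM, which is why it fits on most consumer GPUs and runs faster locally than over a distributed swarm.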

Let us know if you have any other questions.