bigscience-workshop / petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
https://petals.dev
MIT License · 9.27k stars · 524 forks
Add Mixtral models #553
Closed: artek0chumak closed this issue 8 months ago

artek0chumak commented 10 months ago
Add Mixtral models as runnable models.

TODO

- [x] Wait for support of the new version of transformers (>4.35)
- [x] Add normal support for Cache
- [x] Check compatibility of larger models (TinyMixtral, Mixtral-8x7B) on GPU
- [x] Refactor some code (see the `# TODO` comments)
- [ ] Optimize code for improved speed (maybe in the next PR?)
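The "support for Cache" item above refers to the key/value cache abstraction that newer versions of transformers use for attention during generation. As a rough illustration of the idea only, here is a minimal, hypothetical sketch of such a per-layer KV cache; `SimpleKVCache` and its method names are illustrative and are not the actual Petals or transformers API.

```python
class SimpleKVCache:
    """Toy per-layer key/value cache (illustrative sketch, not a real API).

    Each attention layer gets its own list of cached key blocks and value
    blocks; generation appends one new block per decoded step.
    """

    def __init__(self):
        self.key_cache = []    # key_cache[layer_idx] -> list of key blocks
        self.value_cache = []  # value_cache[layer_idx] -> list of value blocks

    def update(self, key, value, layer_idx):
        # Grow the per-layer lists lazily so layers can be filled in any order.
        while len(self.key_cache) <= layer_idx:
            self.key_cache.append([])
            self.value_cache.append([])
        # Append the new key/value block for this layer and return the
        # full cached sequence, as an attention step would consume it.
        self.key_cache[layer_idx].append(key)
        self.value_cache[layer_idx].append(value)
        return self.key_cache[layer_idx], self.value_cache[layer_idx]

    def get_seq_length(self, layer_idx=0):
        # Number of cached positions for a layer (each block here holds
        # one list of token vectors).
        if layer_idx >= len(self.key_cache):
            return 0
        return sum(len(block) for block in self.key_cache[layer_idx])
```

In the real integration, the distributed server side would hold a cache like this per inference session so that each new token only requires attending over previously cached keys and values instead of recomputing the whole prefix.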