bigscience-workshop / petals

🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

Hide excess key message #476

Closed · borzunov closed this 1 year ago

borzunov commented 1 year ago

Before:

Aug 23 23:51:31.394 [INFO] Loaded Maykeye/TinyLLama-v0 block 0, _IncompatibleKeys(missing_keys=[], unexpected_keys=['self_attn.rotary_emb.inv_freq'])

After:

Aug 23 23:51:31.394 [INFO] Loaded Maykeye/TinyLLama-v0 block 0

Hiding this message because the excess keys in Llama-based models are harmless as of the latest transformers release.
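A minimal sketch of the idea, assuming the block weights are applied with `load_state_dict(strict=False)` and the returned key report was previously appended to the log line. The function, constant, and argument names below are illustrative, not the actual Petals code:

```python
import logging

from torch import nn

logger = logging.getLogger(__name__)

# Keys that newer `transformers` releases treat as buffers rather than weights
# in Llama-based checkpoints; seeing them as "unexpected" is harmless.
IGNORED_UNEXPECTED_KEY_SUFFIXES = (".rotary_emb.inv_freq",)


def load_block_weights(block: nn.Module, state_dict: dict, model_name: str, block_index: int) -> None:
    report = block.load_state_dict(state_dict, strict=False)

    # Genuinely missing or unknown keys are still worth surfacing.
    unexpected = [
        key for key in report.unexpected_keys
        if not key.endswith(IGNORED_UNEXPECTED_KEY_SUFFIXES)
    ]
    if report.missing_keys or unexpected:
        logger.warning(
            f"Loaded {model_name} block {block_index} with issues: "
            f"missing_keys={report.missing_keys}, unexpected_keys={unexpected}"
        )
    else:
        # Matches the "After" output above: no key report when everything is fine.
        logger.info(f"Loaded {model_name} block {block_index}")
```

With this approach the known-benign `self_attn.rotary_emb.inv_freq` entries are filtered out before logging, so the INFO line stays clean while any real mismatch is still reported.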