huggingface / candle

Minimalist ML framework for Rust
Apache License 2.0

1.58 bit implementation #1956

Open okpatil4u opened 7 months ago

okpatil4u commented 7 months ago

Would it be possible to implement 1.58-bit quantization in candle? It was proposed in the following paper:

https://arxiv.org/pdf/2402.17764.pdf

The main inspiration behind a 1.58-bit implementation is that you could replace matrix multiplication with addition. If that is feasible, then with the Apple Accelerate framework's SIMD instructions we could expect faster training and inference on large language models.
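To make that idea concrete, here is a minimal sketch in plain Rust (no candle APIs; all names are hypothetical): with every weight restricted to {-1, 0, +1} (about log2(3) ≈ 1.58 bits per weight), a matrix-vector product reduces to additions, subtractions, and skips.

```rust
// Hypothetical illustration: a matrix-vector product with ternary weights
// needs no multiplications -- each weight either adds, subtracts, or skips
// the corresponding activation.
fn ternary_matvec(weights: &[Vec<i8>], x: &[f32]) -> Vec<f32> {
    weights
        .iter()
        .map(|row| {
            let mut acc = 0.0f32;
            for (&w, &xi) in row.iter().zip(x.iter()) {
                match w {
                    1 => acc += xi,  // +1: add the activation
                    -1 => acc -= xi, // -1: subtract the activation
                    _ => {}          //  0: skip entirely
                }
            }
            acc
        })
        .collect()
}

fn main() {
    // 2x3 ternary weight matrix and a 3-element input, made up for the example.
    let w = vec![vec![1i8, -1, 0], vec![0, 1, 1]];
    let x = [0.5f32, 2.0, -1.0];
    println!("{:?}", ternary_matvec(&w, &x)); // [-1.5, 1.0]
}
```

A SIMD or Accelerate-backed version would vectorize the add/subtract passes, but the core point is the same: the inner loop contains no floating-point multiplies.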

A couple of llama.cpp discussions here:

https://github.com/ggerganov/llama.cpp/issues/5761
https://github.com/ggerganov/llama.cpp/pull/5999

There is also a training library that was released a couple of days ago: https://github.com/rafacelente/bllama

Any thoughts?

LaurentMazare commented 7 months ago

Are there some reference trained models somewhere? I haven't been able to find any so far.

okpatil4u commented 7 months ago

Apparently this one trains a 54M-parameter model from scratch.

https://github.com/pranavjad/tinyllama-bitnet

And this one is a pretty good quantization technique that retains model performance. They have also released the model weights.

https://mobiusml.github.io/1bit_blog/

What is more interesting to me is the replacement of matrix multiplication with addition, leading to significant performance gains.
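For reference, here is a minimal sketch (again plain Rust, no candle APIs) of the absmean ternary quantization described in the BitNet b1.58 paper, i.e. how full-precision weights end up in {-1, 0, +1} before an add-only matmul; this is not the HQQ-based approach from the linked blog post.

```rust
// Sketch of absmean quantization from the BitNet b1.58 paper: scale each weight
// by the mean absolute value of the tensor, round, and clip to {-1, 0, +1}.
// The scale is returned so outputs can be rescaled after the add-only matmul.
fn absmean_quantize(weights: &[f32]) -> (Vec<i8>, f32) {
    let eps = 1e-6f32;
    let scale = weights.iter().map(|w| w.abs()).sum::<f32>() / weights.len() as f32 + eps;
    let quantized = weights
        .iter()
        .map(|&w| (w / scale).round().clamp(-1.0, 1.0) as i8)
        .collect();
    (quantized, scale)
}

fn main() {
    // Made-up weights just to show the mapping to {-1, 0, +1}.
    let w = [0.9f32, -0.05, -1.3, 0.4];
    let (q, scale) = absmean_quantize(&w);
    println!("quantized = {:?}, scale = {}", q, scale); // quantized = [1, 0, -1, 1]
}
```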

okpatil4u commented 7 months ago

And the official models are here:

https://huggingface.co/1bitLLM/bitnet_b1_58-3B

LaurentMazare commented 7 months ago

Not sure how close to complete this is, but @tomsanbear has put up bitnet-rs, which seems to be a candle implementation of this architecture.

okpatil4u commented 7 months ago

Thanks @LaurentMazare, this is super helpful.

akashicMarga commented 2 weeks ago

@LaurentMazare the official implementation is out: https://github.com/microsoft/BitNet/tree/main