Are there any reference trained models available somewhere? I haven't been able to find any so far.
Apparently this one trains a 54M-parameter model from scratch.
https://github.com/pranavjad/tinyllama-bitnet
And this one describes a pretty good quantization technique that retains model performance. They have also released the model weights.
https://mobiusml.github.io/1bit_blog/
What is more interesting to me is replacing matrix multiplication with addition, which could lead to significant performance gains.
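For concreteness, here is a minimal sketch of what that replacement looks like. It is plain Rust rather than candle, and the function and variable names are made up for illustration: once the weights are constrained to {-1, 0, +1}, each dot product reduces to additions and subtractions, with no multiplies at all.

```rust
/// Illustrative sketch only: a matrix-vector product where the weights are
/// ternary values in {-1, 0, +1}. Every "multiply" collapses into an add,
/// a subtract, or a skip.
fn ternary_matvec(weights: &[Vec<i8>], x: &[f32]) -> Vec<f32> {
    weights
        .iter()
        .map(|row| {
            let mut acc = 0.0f32;
            for (&w, &xi) in row.iter().zip(x.iter()) {
                match w {
                    1 => acc += xi,  // +1: add the activation
                    -1 => acc -= xi, // -1: subtract the activation
                    _ => {}          //  0: contributes nothing
                }
            }
            acc
        })
        .collect()
}

fn main() {
    // Two output rows, four input features.
    let weights = vec![vec![1, -1, 0, 1], vec![0, 1, 1, -1]];
    let x = vec![0.5, -1.0, 2.0, 0.25];
    println!("{:?}", ternary_matvec(&weights, &x)); // [1.75, 0.75]
}
```

The add/subtract/skip pattern is also what makes the approach SIMD-friendly: the inner loop is just masked additions, which vectorize well.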
And the official models are here
Not sure how close to complete this is, but @tomsanbear has put up bitnet-rs, which seems to be a candle implementation of this architecture.
Thanks @LaurentMazare, this is super helpful.
@LaurentMazare the official implementation is out: https://github.com/microsoft/BitNet/tree/main
Would it be possible to implement 1.58-bit quantization in candle? It was proposed in the following paper:
https://arxiv.org/pdf/2402.17764.pdf
The main inspiration behind a 1.58-bit implementation is that you can replace matrix multiplication with addition. If that is feasible, then with the Apple Accelerate framework's SIMD instructions we could expect faster training and inference on large language models.
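As a rough illustration of the weight quantization step the paper describes (absmean: scale the weight matrix by its mean absolute value, then round and clip every entry to {-1, 0, +1}), here is a small plain-Rust sketch. The function name and layout are my own and are not candle or BitNet code.

```rust
/// Sketch of absmean quantization as described in the BitNet b1.58 paper:
/// gamma = mean(|W|); each weight becomes round(W / (gamma + eps)) clipped
/// to [-1, 1]. The scale gamma is kept around to dequantize the output of
/// the additive matmul.
fn absmean_quantize(w: &[f32]) -> (Vec<i8>, f32) {
    let eps = 1e-6f32;
    let gamma = w.iter().map(|v| v.abs()).sum::<f32>() / w.len() as f32;
    let q = w
        .iter()
        .map(|&v| (v / (gamma + eps)).round().clamp(-1.0, 1.0) as i8)
        .collect();
    (q, gamma)
}

fn main() {
    let w = vec![0.9, -0.05, 0.4, -1.2];
    let (q, gamma) = absmean_quantize(&w);
    println!("gamma = {}, quantized = {:?}", gamma, q); // quantized = [1, 0, 1, -1]
}
```

With the weights in this ternary form, the forward pass can use an addition-only kernel like the one sketched earlier in the thread.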
A couple of llama.cpp discussions:
https://github.com/ggerganov/llama.cpp/issues/5761
https://github.com/ggerganov/llama.cpp/pull/5999
There is also a training library that was released a couple of days ago: https://github.com/rafacelente/bllama
Any thoughts?