ikawrakow / ik_llama.cpp

llama.cpp fork with additional SOTA quants and improved performance
MIT License

Faster IQ1_BN Metal implementation #107

Closed · ikawrakow closed this 2 weeks ago

ikawrakow commented 2 weeks ago

On my 30-core M2-Max, TG-128 for Bitnet-1.58b-3.3B improves from 82 t/s to 94.7 t/s, and PP-512 goes from 686 t/s to 702 t/s.

Integer multiplications are expensive, so the trick used is to replace them with shifts and additions.
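As a rough illustration of that trick (not the actual Metal kernel from this PR, and assuming the unpacking needs multiplications by small powers of 3, which is an assumption made here for the example), here is a minimal C sketch of replacing multiplies by 3, 9, and 27 with shifts and additions:

```c
// Illustrative sketch only: the arithmetic identity behind replacing integer
// multiplication by small constants with shifts and additions.
// The constants 3, 9, 27 are assumed here purely for illustration.
#include <stdint.h>
#include <stdio.h>

// x * 3  = 2x + x
static inline uint32_t mul3(uint32_t x)  { return (x << 1) + x; }
// x * 9  = 8x + x
static inline uint32_t mul9(uint32_t x)  { return (x << 3) + x; }
// x * 27 = 16x + 8x + 2x + x
static inline uint32_t mul27(uint32_t x) { return (x << 4) + (x << 3) + (x << 1) + x; }

int main(void) {
    // Sanity check: results match x*3, x*9, x*27 without using an integer multiply.
    for (uint32_t x = 0; x < 8; ++x)
        printf("%u: %u %u %u\n", x, mul3(x), mul9(x), mul27(x));
    return 0;
}
```

On hardware where integer multiplies are comparatively slow, each such multiply becomes one or two shift-and-add pairs, which is where the TG speedup in this PR comes from.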

There is also a minor IQ2_BN PP-512 improvement (710 -> 714 t/s).