ikawrakow / ik_llama.cpp

llama.cpp fork with additional SOTA quants and improved performance
MIT License

iq2_tn: slightly faster PP on Zen4 #43

Closed by ikawrakow 2 months ago

ikawrakow commented 2 months ago

With this change, PP512 reaches 494 t/s (with flash attention), up from 468 t/s (a ~5.5% improvement), running on a Ryzen-7950X CPU.

Compared to the initial IQ2_TN PR #13, the cumulative improvement is 15%.

Compared to TQ2_0, which has now been merged into llama.cpp, we are now 80% faster.
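For reference, the percentages above follow from the stated throughputs; a quick sketch (the implied baselines for PR #13 and TQ2_0 are derived from the quoted percentages, not separately measured):

```python
def speedup_pct(new_tps: float, old_tps: float) -> float:
    """Relative throughput improvement, in percent."""
    return (new_tps / old_tps - 1.0) * 100.0

# This PR vs. the previous ik_llama.cpp state: 494 t/s vs. 468 t/s
print(f"{speedup_pct(494, 468):.1f}%")  # → 5.6%

# Implied baselines (back-calculated from the quoted 15% and 80% figures):
print(f"PR #13 baseline: ~{494 / 1.15:.0f} t/s")   # → ~430 t/s
print(f"TQ2_0 (llama.cpp): ~{494 / 1.80:.0f} t/s") # → ~274 t/s
```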