ikawrakow / ik_llama.cpp

llama.cpp clone with additional SOTA quants and improved CPU performance
MIT License

iq2_tn: slightly better performance on AVX2 #47

Closed · ikawrakow closed this 1 week ago

ikawrakow commented 1 week ago

We get PP-512 = 545 t/s (prompt processing, 512-token prompt) for the 4B TriLM model, compared to PP-512 = 498 t/s on the main branch, a ~9% improvement (measured on a Ryzen-5975WX). TG (token generation) is not affected.

It is possible to increase PP-512 performance further, to 600 t/s, by representing IQ2_TN as a row scale plus IQ1_BN packed quants and reusing the IQ2_BN implementation; see the iq2_tn_as_iq2_bn branch. The issue with the iq2_tn_as_iq2_bn implementation is that TG performance on the Ryzen-5975WX saturates at about 38 t/s, while with this PR we get 50.5 t/s. So I prefer this change for now; perhaps I can sort out where the TG bottleneck in iq2_tn_as_iq2_bn is later.
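For readers unfamiliar with the representation being discussed: the idea is to store one scale per row and pack the ternary weights {-1, 0, +1} separately. Below is a minimal, self-contained C++ sketch of that scheme. It is not the actual ik_llama.cpp block layout: the names `TernaryRow`, `quantize_row`, and `dequantize` are hypothetical, and the real IQ1_BN packing is denser than the naive 2-bits-per-weight used here.

```cpp
// Hypothetical illustration of "row scale + packed ternary quants".
// Not the actual IQ2_TN / IQ1_BN layout from ik_llama.cpp.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// A row of n ternary weights w[i] in {-1, 0, +1} scaled by a single
// per-row scale d, i.e. x[i] ~= d * w[i]. Each weight is stored as a
// 2-bit code (w + 1, in {0, 1, 2}), four codes per byte.
struct TernaryRow {
    float d;                  // per-row scale
    std::vector<uint8_t> qs;  // packed 2-bit codes, 4 per byte
};

static TernaryRow quantize_row(const float* x, int n) {
    // Row scale: the largest magnitude (every weight is -d, 0, or +d).
    float amax = 0.f;
    for (int i = 0; i < n; ++i) amax = std::max(amax, std::fabs(x[i]));
    TernaryRow row{amax, std::vector<uint8_t>((n + 3) / 4, 0)};
    const float id = amax > 0 ? 1.f / amax : 0.f;
    for (int i = 0; i < n; ++i) {
        // Round x[i]/d to the nearest of {-1, 0, +1}, bias to {0, 1, 2}.
        int w = (int)std::lround(x[i] * id);
        row.qs[i / 4] |= (uint8_t)((w + 1) << 2 * (i % 4));
    }
    return row;
}

static float dequantize(const TernaryRow& row, int i) {
    int code = (row.qs[i / 4] >> 2 * (i % 4)) & 3;  // in {0, 1, 2}
    return row.d * (code - 1);                      // back to {-d, 0, +d}
}

int main() {
    float x[8] = {0.5f, -0.5f, 0.f, 0.5f, -0.5f, 0.f, 0.5f, 0.5f};
    TernaryRow row = quantize_row(x, 8);
    for (int i = 0; i < 8; ++i) printf("%g ", dequantize(row, i));
    printf("\n");  // prints: 0.5 -0.5 0 0.5 -0.5 0 0.5 0.5
}
```

The appeal of such a layout is that the scale is factored out of the packed data entirely, so a matrix-multiply kernel can run the inner loop on the packed codes and apply the row scale once at the end, which is consistent with the PP gains reported above for the iq2_tn_as_iq2_bn branch.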