turboderp / exllama

A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
MIT License

Performance issues #263

Open bryanhpchiang opened 1 year ago

bryanhpchiang commented 1 year ago

Have you tried this yet?

https://github.com/InternLM/lmdeploy

In my initial testing on 7B and 13B models there's a noticeable per-token latency improvement (measured as the time to generate the first 5 tokens).
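
For reference, a minimal sketch of how a "time to first 5 tokens" measurement could be taken; it uses a plain HF transformers model as a stand-in (not lmdeploy or exllama), and the model id and prompt are placeholders:

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder; swap in the model under test
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)

# Warm-up pass so CUDA kernels and caches are initialized before timing.
model.generate(**inputs, max_new_tokens=5)

torch.cuda.synchronize()
start = time.perf_counter()
model.generate(**inputs, max_new_tokens=5)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start
print(f"first 5 tokens: {elapsed:.3f}s ({elapsed / 5 * 1000:.1f} ms/token)")
```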

Ph0rk0z commented 1 year ago

Uses AWQ. I wonder about perplexity and memory performance of that format vs GPTQ.
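
One rough way to compare the two formats would be a standard sliding-window perplexity run over wikitext-2, pointed once at an AWQ checkpoint and once at a GPTQ checkpoint of the same base model. The sketch below assumes a generic HF-loadable checkpoint; the path, context length, and stride are assumptions, not anyone's published setup:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/quantized-checkpoint"  # hypothetical: load the AWQ or GPTQ model here
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Sliding-window perplexity over wikitext-2, the usual yardstick for quantization quality.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
enc = tokenizer(text, return_tensors="pt")
seq_len = enc.input_ids.size(1)

max_length, stride = 2048, 512
nlls, prev_end = [], 0
for begin in range(0, seq_len, stride):
    end = min(begin + max_length, seq_len)
    trg_len = end - prev_end  # tokens actually scored in this window
    input_ids = enc.input_ids[:, begin:end].to(model.device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # mask out the overlapping context
    with torch.no_grad():
        nlls.append(model(input_ids, labels=target_ids).loss)
    prev_end = end
    if end == seq_len:
        break

print("perplexity:", torch.exp(torch.stack(nlls).mean()).item())
```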

bryanhpchiang commented 1 year ago

The paper has more details, but memory usage and task performance don't seem degraded at all.
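
To sanity-check the memory side on your own hardware, peak VRAM can be read back after a generation pass. This snippet assumes the `model` and `inputs` from the timing sketch earlier in the thread and is purely illustrative:

```python
import torch

# Reuses `model` and `inputs` from the latency sketch above (assumption).
torch.cuda.reset_peak_memory_stats()
model.generate(**inputs, max_new_tokens=128)
print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 2**30:.2f} GiB")
```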

Ph0rk0z commented 1 year ago

The paper probably doesn't compare against optimized exllama at 64G. Remember the SPQR paper doing something similar. A lot of authors present very favorable results in their graphs and creatively omit competing projects; then they're suddenly the best thing since sliced bread, when in reality it's not so.

The real test will come with multi-GPU 70B, not 7B. Ideally AWQ should be added to textgen so we can see how their default implementation does. When I first saw it, I think it was incomplete, and then I forgot about it.