pytorch-labs / gpt-fast

Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.
BSD 3-Clause "New" or "Revised" License

Will these optimizations be integrated into HF's code? #9

Open lucasjinreal opened 10 months ago

lucasjinreal commented 10 months ago

so that everyone can use it out of the box?

aniketmaurya commented 10 months ago

Most of these features are already supported in Lit-GPT (if you're looking to finetune LLMs), and more will be supported soon. You can use LLMs from the HF model hub.

SunMarc commented 10 months ago

Thanks for the interest! We already support most of the optimizations described here:

Chillee commented 10 months ago

@SunMarc I think there might still be some gaps in how the kv-cache is handled during inference. Specifically, the link you sent is about vision models, not text generation.

We should chat more about this - I'd love to see the techniques here integrated.
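
For context on what "how the kv-cache is handled" means here: gpt-fast's approach is to preallocate the cache at maximum sequence length and update it in place, so tensor shapes stay static and friendly to torch.compile. A minimal sketch of that pattern (the `KVCache` class and `update` method below are illustrative, not HF's or this repo's exact API):

```python
import torch
import torch.nn as nn

class KVCache(nn.Module):
    """Preallocated key/value cache updated in place (illustrative sketch)."""

    def __init__(self, max_batch_size, max_seq_len, n_heads, head_dim, dtype=torch.bfloat16):
        super().__init__()
        shape = (max_batch_size, n_heads, max_seq_len, head_dim)
        # Buffers are allocated once at max sequence length; their shapes never
        # change during decoding, which is what keeps torch.compile happy.
        self.register_buffer("k_cache", torch.zeros(shape, dtype=dtype))
        self.register_buffer("v_cache", torch.zeros(shape, dtype=dtype))

    def update(self, input_pos, k_val, v_val):
        # input_pos: [seq_len] positions being written this step
        # k_val, v_val: [batch, n_heads, seq_len, head_dim]
        self.k_cache[:, :, input_pos] = k_val
        self.v_cache[:, :, input_pos] = v_val
        return self.k_cache, self.v_cache
```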

SunMarc commented 10 months ago

Yes, absolutely! cc @younesbelkada for visibility

yhyu13 commented 10 months ago

These optimizations should already be in HF. Moreover, hardware-specific optimizations, such as writing your own CUDA kernels for GPTQ and paged attention (e.g. flash_attn2), would make inference even faster.

https://github.com/turboderp/exllamav2 has benchmarked llama-7b at 190+ t/s on a single 3090 Ti, which matches this repo on 8xA100, but a 3090 Ti has only about 1/3 the FLOPS of a single A100. So hardware-specific optimization is another driver of performance.
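
As an aside, one way to tap those hardware-optimized attention kernels from plain PyTorch is the built-in scaled_dot_product_attention API, which can dispatch to a FlashAttention backend. A minimal sketch, assuming a CUDA device and PyTorch around 2.1 (the shapes are arbitrary decode-step shapes, not a benchmark; newer releases expose `torch.nn.attention.sdpa_kernel` instead of the context manager used here):

```python
import torch
import torch.nn.functional as F

# Arbitrary decode-step shapes: batch=1, 32 heads, 1 query token, 4096 cached tokens.
q = torch.randn(1, 32, 1, 128, device="cuda", dtype=torch.float16)
k = torch.randn(1, 32, 4096, 128, device="cuda", dtype=torch.float16)
v = torch.randn(1, 32, 4096, 128, device="cuda", dtype=torch.float16)

# Restrict SDPA to the FlashAttention backend only.
with torch.backends.cuda.sdp_kernel(enable_flash=True,
                                    enable_math=False,
                                    enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, k, v)

print(out.shape)  # torch.Size([1, 32, 1, 128])
```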

lucasjinreal commented 10 months ago

Hi, does torch.compile work with AWQ?

(It seems HF already supports AWQ, but the quantization approach might not be the same as in this repo.)

How do you enable speculative decoding in HF?
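
On the last question: HF transformers exposes speculative decoding as assisted generation, where you pass a small draft model to `generate()` via `assistant_model`. A rough sketch, assuming a recent transformers release and two checkpoints that share a tokenizer (the model names below are placeholders, not a recommendation):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoints: a large target model and a small draft model
# that share the same tokenizer/vocabulary.
target_name = "meta-llama/Llama-2-7b-hf"
draft_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

tokenizer = AutoTokenizer.from_pretrained(target_name)
model = AutoModelForCausalLM.from_pretrained(
    target_name, torch_dtype=torch.float16, device_map="auto"
)
assistant = AutoModelForCausalLM.from_pretrained(
    draft_name, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Speculative decoding works by", return_tensors="pt").to(model.device)

# The draft model proposes several tokens per step; the target model
# verifies them in one forward pass and keeps the accepted prefix.
outputs = model.generate(**inputs, assistant_model=assistant, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```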

Chillee commented 10 months ago

@yhyu13

> https://github.com/turboderp/exllamav2 has benchmarked llama-7b at 190+ t/s on a single 3090 Ti, which matches this repo on 8xA100, but a 3090 Ti has only about 1/3 the FLOPS of a single A100.

To be clear, the benchmark in this repo is 197 t/s on a single A100 with a group size of 32, while exllamav2 is running on a single 4090 with a group size of 128.

Still, certainly very good results from exllamav2 :)