huggingface / optimum-neuron

Easy, fast and very cheap training and inference on AWS Trainium and Inferentia chips.

Improve llama models performance #587

Closed: dacorvo closed this 2 months ago

dacorvo commented 2 months ago

What does this PR do?

This PR modifies the default Neuron configuration used when exporting Llama models for inference, setting the attention layout to "BSH" instead of "HSB".

This configuration has almost no impact on token generation time (a.k.a. decode), and significantly reduces context encoding time (a.k.a. prefill) for Llama2-7B and Llama3-8B.
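For reference, the layout the exporter selects corresponds to the `attention_layout` knob in transformers-neuronx, which optimum-neuron uses under the hood for decoder models. Below is a minimal sketch of setting it explicitly at compile time; the checkpoint name and compile settings (batch size, sequence length, tensor-parallel degree) are placeholders, not values from this PR:

```python
# Sketch: compiling a Llama checkpoint with an explicit attention layout.
# All paths and compile settings below are illustrative placeholders.
from transformers_neuronx.config import NeuronConfig
from transformers_neuronx.llama.model import LlamaForSampling

# "BSH" (batch, sequence, hidden) is the layout this PR makes the default;
# the previous default was "HSB".
neuron_config = NeuronConfig(attention_layout="BSH")

model = LlamaForSampling.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder checkpoint
    batch_size=1,
    n_positions=2048,            # maximum sequence length
    tp_degree=2,                 # tensor parallelism across Neuron cores
    amp="f16",
    neuron_config=neuron_config,
)
model.to_neuron()  # trigger compilation for the Neuron devices
```

With this PR merged, optimum-neuron users exporting via `NeuronModelForCausalLM.from_pretrained(..., export=True)` should get the "BSH" layout automatically, with no explicit configuration needed.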

Benchmark updates:

In the process, the TGI router version is bumped to 2.0.2.

HuggingFaceDocBuilderDev commented 2 months ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

dacorvo commented 2 months ago

The TGI tests are failing because I need to remove the test Llama Neuron model under the optimum org. But if I do that before the PR is merged, it will break the CI for all other pull requests.