Closed dacorvo closed 2 months ago
The TGI tests are failing because I need to remove the test Llama Neuron model under the optimum
org. But if I do that before the PR is merged, it will break the CI for all other pull requests.
What does this PR do?
This modifies the default Neuron configuration when exporting Llama models for inference, setting the attention layout to "BSH" instead of "HSB".
This configuration has almost no impact on the token generation time (a.k.a. `decode`), and significantly reduces the context encoding time (a.k.a. `prefill`) for Llama2-7b and Llama3-8B.

Benchmark updates:
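For readers unfamiliar with the layout names: "BSH" and "HSB" describe the axis ordering of the attention tensors, i.e. (batch, sequence, hidden) versus (hidden, sequence, batch). A minimal sketch of what the two orderings mean (the shapes and helper below are illustrative only, not code from this PR):

```python
# Hypothetical tensor dimensions, chosen only for illustration.
batch, seq, hidden = 2, 8, 16

# "BSH" layout: axes ordered (batch, sequence, hidden).
bsh_shape = (batch, seq, hidden)

# "HSB" layout: axes ordered (hidden, sequence, batch).
hsb_shape = (hidden, seq, batch)

def bsh_to_hsb(shape):
    """Converting between the two layouts swaps the first and last axes."""
    return (shape[2], shape[1], shape[0])

assert bsh_to_hsb(bsh_shape) == hsb_shape  # (16, 8, 2)
```

The change in this PR only flips the default ordering used when compiling the model for Neuron; it does not alter the model's weights or outputs.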
In the process, the TGI router version is bumped to 2.0.2.