intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc
Apache License 2.0

llama 3.1/3.2 support compresskv #12347

Closed: cyita closed this pull request 2 weeks ago

cyita commented 2 weeks ago

Description

Add compresskv (KV cache compression) support for Llama 3.1/3.2. A usage sketch follows the template below.

1. Why the change?

2. User API changes

3. Summary of the change

4. How to test?

5. New dependencies
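
For context, here is a minimal usage sketch of what this PR enables: loading a Llama 3.1/3.2 model through ipex-llm with compresskv turned on. The `IPEX_LLM_COMPRESS_KV_CACHE` environment variable is an assumption based on how compresskv is toggled elsewhere in ipex-llm, and the checkpoint name is only an example; check the current docs before relying on either.

```python
# Hedged sketch (not part of this PR): running a Llama 3.2 model on an Intel
# GPU with KV cache compression requested. The env-var toggle below is
# assumed from existing ipex-llm conventions and may differ in a release.
import os
os.environ["IPEX_LLM_COMPRESS_KV_CACHE"] = "1"  # request compresskv

import torch
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-3.2-3B-Instruct"  # any Llama 3.1/3.2 checkpoint
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,        # low-bit weights, the usual ipex-llm setup
    trust_remote_code=True,
)
model = model.to("xpu")       # run on the Intel XPU device

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
inputs = tokenizer("What is KV cache compression?", return_tensors="pt").to("xpu")
with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```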

cyita commented 2 weeks ago

For Llama 3.2 1B, compresskv will be disabled, since the performance improvement is not significant.
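
One hedged way such a size gate might look (a hypothetical helper, not the PR's actual code): inspect the model config so compresskv stays off for the 1B variant while remaining available for the larger Llama 3.1/3.2 models.

```python
# Hypothetical sketch of a compresskv size gate; the real PR logic may differ.
# Llama 3.2 1B has hidden_size 2048, the 3B variant 3072, and Llama 3.1 8B
# 4096, so hidden_size is one simple proxy for model size.
def should_use_compresskv(config) -> bool:
    """Return True if compresskv is worth enabling for this Llama config."""
    is_llama_1b = getattr(config, "hidden_size", 0) <= 2048
    # Skip compression for the 1B model, where the speedup is not significant.
    return not is_llama_1b
```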

cyita commented 2 weeks ago

PR validation: https://github.com/intel-analytics/ipex-llm-workflow/actions/runs/11700267684