
Efficient Large Language Models: A Survey

Efficient Large Language Models: A Survey [arXiv] (Version 1: 12/06/2023; Version 2: 12/23/2023; Version 3: 01/31/2024; Version 4: 05/23/2024, camera-ready version for Transactions on Machine Learning Research)

Zhongwei Wan1, Xin Wang1, Che Liu2, Samiul Alam1, Yu Zheng3, Jiachen Liu4, Zhongnan Qu5, Shen Yan6, Yi Zhu7, Quanlu Zhang8, Mosharaf Chowdhury4, Mi Zhang1

1The Ohio State University, 2Imperial College London, 3Michigan State University, 4University of Michigan, 5Amazon AWS AI, 6Google Research, 7Boson AI, 8Microsoft Research Asia

⚡News: Our survey has been officially accepted by Transactions on Machine Learning Research (TMLR), May 2024. The camera-ready version is available at [OpenReview].

```bibtex
@article{wan2023efficient,
  title={Efficient large language models: A survey},
  author={Wan, Zhongwei and Wang, Xin and Liu, Che and Alam, Samiul and Zheng, Yu and others},
  journal={arXiv preprint arXiv:2312.03863},
  year={2023}
}
```

❤️ Community Support

This repository is maintained by tuidan (wang.15980@osu.edu), SUSTechBruce (wan.512@osu.edu), samiul272 (alam.140@osu.edu), and mi-zhang (mizhang.1@osu.edu). We welcome feedback, suggestions, and contributions that help improve this survey and repository and make them a valuable resource for the entire community.

We will actively maintain this repository by incorporating new research as it emerges. If you have suggestions regarding our taxonomy, notice any missed papers, or want to update an arXiv preprint that has since been accepted at a venue, feel free to send us an email or submit a pull request using the following markdown format.

Paper Title, <ins>Conference/Journal/Preprint, Year</ins>  [[pdf](link)] [[other resources](link)].
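For instance, a completed entry for this survey itself would look like:

Efficient Large Language Models: A Survey, <ins>TMLR, 2024</ins> [[pdf](https://arxiv.org/abs/2312.03863)].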

📌 What is This Survey About?

Large Language Models (LLMs) have demonstrated remarkable capabilities in many important tasks and have the potential to make a substantial impact on our society. Such capabilities, however, come with considerable resource demands, highlighting the strong need to develop effective techniques for addressing the efficiency challenges posed by LLMs. In this survey, we provide a systematic and comprehensive review of efficient LLMs research. We organize the literature in a taxonomy consisting of three main categories, covering distinct yet interconnected topics in efficient LLMs from model-centric, data-centric, and framework-centric perspectives, respectively. We hope our survey and this GitHub repository can serve as valuable resources that help researchers and practitioners gain a systematic understanding of research developments in efficient LLMs and inspire them to contribute to this important and exciting field.

🤔 Why Efficient LLMs are Needed?

![Figure 1: model performance vs. GPU hours for training (left) and vs. inference throughput (right) for the LLaMA series and Mistral-7B](img/image.jpg)

Although LLMs are leading the next wave of the AI revolution, their remarkable capabilities come at the cost of substantial resource demands. Figure 1 (left) illustrates the relationship between model performance and model training time, measured in GPU hours, for the LLaMA series, where the size of each circle is proportional to the number of model parameters. As shown, although larger models achieve better performance, the GPU hours required to train them grow exponentially as model sizes scale up. In addition to training, inference also contributes significantly to the operational cost of LLMs. Figure 1 (right) depicts the relationship between model performance and inference throughput. Similarly, scaling up the model size enables better performance but comes at the cost of lower inference throughput (higher inference latency), which makes it challenging to extend these models to a broader customer base and diverse applications in a cost-effective way. The high resource demands of LLMs highlight the strong need to develop techniques to enhance their efficiency. As shown in Figure 1 (right), compared to LLaMA-1-33B, Mistral-7B, which uses grouped-query attention and sliding-window attention to speed up inference, achieves comparable performance with much higher throughput. This comparison highlights both the feasibility and the importance of designing efficiency techniques for LLMs.
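For intuition on one of these techniques, the sketch below shows a minimal grouped-query attention forward pass in PyTorch: a small number of key/value heads is shared across groups of query heads, which shrinks the key/value projections and the KV cache relative to standard multi-head attention. The dimensions are toy values chosen for illustration, not Mistral-7B's actual configuration.

```python
# Minimal sketch of grouped-query attention (GQA); toy dimensions, not Mistral-7B's config.
import torch
import torch.nn.functional as F

d_model, n_q_heads, n_kv_heads, seq_len = 512, 8, 2, 16
head_dim = d_model // n_q_heads
group = n_q_heads // n_kv_heads  # query heads that share one KV head

x = torch.randn(1, seq_len, d_model)

# Queries keep all heads; keys/values use fewer heads,
# which reduces the KV cache size and speeds up decoding.
w_q = torch.nn.Linear(d_model, n_q_heads * head_dim, bias=False)
w_k = torch.nn.Linear(d_model, n_kv_heads * head_dim, bias=False)
w_v = torch.nn.Linear(d_model, n_kv_heads * head_dim, bias=False)

q = w_q(x).view(1, seq_len, n_q_heads, head_dim).transpose(1, 2)
k = w_k(x).view(1, seq_len, n_kv_heads, head_dim).transpose(1, 2)
v = w_v(x).view(1, seq_len, n_kv_heads, head_dim).transpose(1, 2)

# Broadcast each KV head to its group of query heads before attention.
k = k.repeat_interleave(group, dim=1)
v = v.repeat_interleave(group, dim=1)

attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
out = attn.transpose(1, 2).reshape(1, seq_len, d_model)
print(out.shape)  # torch.Size([1, 16, 512])
```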

📖 Table of Contents

🤖 Model-Centric Methods

Model Compression

Quantization

Post-Training Quantization
Weight-Only Quantization
Evaluation of Post-Training Quantization

Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning

Adapter-based Tuning

🔢 Data-Centric Methods

Data Selection

Data Selection for Efficient Pre-Training

🧑‍💻 System-Level Efficiency Optimization and LLM Frameworks

System-Level Efficiency Optimization

System-Level Pre-Training Efficiency Optimization

System-Level Serving Efficiency Optimization

Serving System Design
Serving Performance Optimization

Algorithm-Hardware Co-Design

LLM Frameworks

| Framework | Efficient Training | Efficient Inference | Efficient Fine-Tuning |
| :--- | :---: | :---: | :---: |
| DeepSpeed [[Code](https://github.com/microsoft/DeepSpeed)] | ✅ | ✅ | ✅ |
| Megatron [[Code](https://github.com/NVIDIA/Megatron-LM)] | ✅ | ✅ | ✅ |
| ColossalAI [[Code](https://github.com/hpcaitech/ColossalAI)] | ✅ | ✅ | ✅ |
| Nanotron [[Code](https://github.com/huggingface/nanotron)] | ✅ | ✅ | ✅ |
| MegaBlocks [[Code](https://github.com/databricks/megablocks)] | ✅ | ✅ | ✅ |
| FairScale [[Code](https://github.com/facebookresearch/fairscale)] | ✅ | ✅ | ✅ |
| Pax [[Code](https://github.com/google/paxml/)] | ✅ | ✅ | ✅ |
| Composer [[Code](https://github.com/mosaicml/composer)] | ✅ | ✅ | ✅ |
| OpenLLM [[Code](https://github.com/bentoml/OpenLLM)] | ❌ | ✅ | ✅ |
| LLM-Foundry [[Code](https://github.com/mosaicml/llm-foundry)] | ❌ | ✅ | ✅ |
| vLLM [[Code](https://github.com/vllm-project/vllm)] | ❌ | ✅ | ❌ |
| TensorRT-LLM [[Code](https://github.com/NVIDIA/TensorRT-LLM)] | ❌ | ✅ | ❌ |
| TGI [[Code](https://github.com/huggingface/text-generation-inference)] | ❌ | ✅ | ❌ |
| RayLLM [[Code](https://github.com/ray-project/ray-llm)] | ❌ | ✅ | ❌ |
| MLC LLM [[Code](https://github.com/mlc-ai/mlc-llm)] | ❌ | ✅ | ❌ |
| Sax [[Code](https://github.com/google/saxml)] | ❌ | ✅ | ❌ |
| Mosec [[Code](https://github.com/mosecorg/mosec)] | ❌ | ✅ | ❌ |
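As a usage sketch for one of the inference-serving frameworks in the table, the snippet below runs offline batched generation with vLLM. The checkpoint name is a placeholder, not a recommendation, and the exact API surface may differ slightly across vLLM versions.

```python
# Sketch of offline batched inference with vLLM (API may vary by version);
# the model name below is a placeholder for any HF-compatible checkpoint.
from vllm import LLM, SamplingParams

prompts = [
    "What makes large language models expensive to serve?",
    "Summarize grouped-query attention in one sentence.",
]
sampling_params = SamplingParams(temperature=0.8, max_tokens=128)

llm = LLM(model="meta-llama/Llama-2-7b-hf")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt)
    print(output.outputs[0].text)
```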