Megatron-LLM

This library enables pre-training and fine-tuning of large language models (LLMs) at scale. Our repository is a modification of the original Megatron-LM codebase by NVIDIA.

Added key features include:

- support for the Llama, Llama 2, Code Llama, Falcon and Mistral architectures
- 3-way parallelism: tensor parallel, pipeline parallel and data parallel training
- grouped-query attention (GQA) and multi-query attention (MQA)
- Rotary Position Embeddings (RoPE) and RMS layer norm
- FlashAttention 2
- BF16 / FP16 training
- WandB integration
- full pretraining, fine-tuning and instruction tuning support
- conversion to and from Hugging Face checkpoints

Documentation

Take a look at the online documentation.

Alternatively, build the docs from source:

cd docs/
pip install -r requirements.txt  # Sphinx and related doc dependencies
make html
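
Assuming the standard Sphinx layout, the rendered pages land in docs/_build/html/; open index.html there to browse them locally.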

Example models trained with Megatron-LLM

(Let us know about yours!)

Citation

If you use this software, please cite it:

@software{epfmgtrn,
  author       = {Alejandro Hernández Cano  and
                  Matteo Pagliardini  and
                  Andreas Köpf  and
                  Kyle Matoba  and
                  Amirkeivan Mohtashami  and
                  Xingyao Wang  and
                  Olivia Simin Fan  and
                  Axel Marmet  and
                  Deniz Bayazit  and
                  Igor Krawczuk  and
                  Zeming Chen  and
                  Francesco Salvi  and
                  Antoine Bosselut  and
                  Martin Jaggi},
  title        = {epfLLM Megatron-LLM},
  year         = 2023,
  url          = {https://github.com/epfLLM/Megatron-LLM}
}