Welcome to Megatron-LLM’s documentation!#

[Image: llama-falcon.png]

The Megatron-LLM library enables pre-training and fine-tuning of large language models (LLMs) at scale. Our repository is a modification of the original Megatron-LM codebase by NVIDIA.

Key features added on top of the original codebase include:

  • architectures supported: LLaMA, Llama 2, Falcon, Code Llama and `Mistral <https://arxiv.org/abs/2310.06825>`_

  • training of large models (70B Llama 2, 65B Llama 1, 34B Code Llama, 40B Falcon and Mistral) on commodity hardware across multiple nodes

  • 3-way parallelism: tensor parallel, pipeline parallel and data parallel training (inherited from Megatron)

  • full pretraining, finetuning and instruct tuning support

  • support for special tokens & tokenizers

  • grouped-query attention (GQA) and multi-query attention (MQA); see the attention sketch after this list

  • Rotary Position Embeddings (RoPE), RMS layer norm, Lima dropout

  • RoPE scaling for longer attention context support; see the RoPE sketch after this list

  • FlashAttention 2

  • BF16 / FP16 training

  • WandB integration

  • metrics support: easily add custom metrics to evaluate on the validation set during training

  • conversion to and from the Hugging Face Hub
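
The following is a minimal PyTorch sketch of grouped-query attention, illustrating how a small number of key/value heads is shared across a larger number of query heads; multi-query attention is the special case of a single KV head, and standard multi-head attention the case where the two counts are equal. This is an illustration of the general technique, not Megatron-LLM's internal implementation; all function and variable names are made up for the example.

# Sketch only: grouped-query attention (GQA). Not the library's actual code.
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    """q: [batch, n_q_heads, seq, head_dim]; k, v: [batch, n_kv_heads, seq, head_dim]."""
    n_q_heads, n_kv_heads = q.shape[1], k.shape[1]
    assert n_q_heads % n_kv_heads == 0, "query heads must be a multiple of KV heads"
    group_size = n_q_heads // n_kv_heads
    # Each KV head serves `group_size` query heads: repeat K/V along the head axis.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ v

# Example: 8 query heads sharing 2 KV heads.
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
out = grouped_query_attention(q, k, v)  # shape [1, 8, 16, 64]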

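A similarly hedged sketch of rotary position embeddings (RoPE) with simple linear "position interpolation" scaling: dividing positions by a scale factor keeps the rotation angles within the range seen during pretraining, so a model trained at a given context length can attend over longer sequences. Again, this is not the library's exact code; names and defaults are assumptions made for the example.

# Sketch only: RoPE with linear position scaling. Not the library's actual code.
import torch

def rope_angles(seq_len, head_dim, base=10000.0, scale=1.0):
    # Standard RoPE frequencies, one per pair of head dimensions.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # Linear scaling: compress positions so longer sequences map into the
    # angle range seen during pretraining.
    positions = torch.arange(seq_len).float() / scale
    return torch.outer(positions, inv_freq)  # [seq_len, head_dim // 2]

def apply_rope(x, angles):
    # x: [..., seq_len, head_dim]; rotate each even/odd dimension pair.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Example: a model pretrained with 4096-token context evaluated at 8192 tokens
# by scaling positions by 2.
q = torch.randn(1, 8, 8192, 64)
q_rot = apply_rope(q, rope_angles(seq_len=8192, head_dim=64, scale=2.0))
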
See the README for example models trained with Megatron-LLM.

User guide#

For information on installation and usage, take a look at our user guide.

API#

Detailed information about Megatron-LLM components is available in the API reference.

Citation#

If you use this software, please cite it:

@software{epfmgtrn,
  author       = {Alejandro Hernández Cano  and
                  Matteo Pagliardini  and
                  Andreas Köpf  and
                  Kyle Matoba  and
                  Amirkeivan Mohtashami  and
                  Xingyao Wang  and
                  Olivia Simin Fan  and
                  Axel Marmet  and
                  Deniz Bayazit  and
                  Igor Krawczuk  and
                  Zeming Chen  and
                  Francesco Salvi  and
                  Antoine Bosselut  and
                  Martin Jaggi},
  title        = {epfLLM Megatron-LLM},
  year         = 2023,
  url          = {https://github.com/epfLLM/Megatron-LLM}
}