CTranslate2

Fast inference engine for Transformer models
https://opennmt.net/CTranslate2
MIT License

CTranslate2 is a C++ and Python library for efficient inference with Transformer models.

The project implements a custom runtime that applies many performance optimization techniques, such as weight quantization, layer fusion, and batch reordering, to accelerate Transformer models and reduce their memory usage on CPU and GPU.
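
To make this concrete: the quantization level can be selected when a converted model is loaded. A minimal sketch, assuming a converted model directory named "ende_ct2_model" (a placeholder):

import ctranslate2

# Load a converted model with weights quantized to 8-bit integers.
# If the hardware does not support int8, CTranslate2 falls back to
# the fastest compute type it does support.
translator = ctranslate2.Translator("ende_ct2_model", device="cpu", compute_type="int8")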

The following model types are currently supported: encoder-decoder models (e.g. Transformer, BART, Whisper) and decoder-only models (e.g. GPT-2, BLOOM).

Compatible models must first be converted into an optimized model format. The library includes converters for multiple frameworks, including OpenNMT-py, OpenNMT-tf, Fairseq, Marian, and Hugging Face Transformers.
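
For example, the Transformers converter installed with the Python package can fetch and convert a model directly from the Hugging Face Hub (the output directory name is arbitrary):

ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-de --output_dir opus-mt-en-de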

The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.

Key features

- Fast and efficient execution on CPU and GPU
- Quantization and reduced precision (INT8, INT16, FP16)
- Automatic CPU detection and code dispatch (AVX, AVX2, NEON)
- Parallel and asynchronous batch execution
- Dynamic memory usage and a lightweight footprint on disk
- Simple integration from Python and C++ with configurable, interactive decoding

Some of these features are difficult to achieve with standard deep learning frameworks and are the motivation for this project. A decoding sketch follows this list.
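
As an illustration of the parallel execution and configurable decoding mentioned above, a sketch with placeholder paths, tokens, and parameter values:

import ctranslate2

# Process up to 4 batches in parallel, each on a single thread.
translator = ctranslate2.Translator("ende_ct2_model", inter_threads=4, intra_threads=1)

# Beam search over a pre-tokenized batch, keeping the 2 best hypotheses.
results = translator.translate_batch(
    [["▁Hello", "▁world", "!"]],
    beam_size=4,
    num_hypotheses=2,
    return_scores=True,
)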

Installation and usage

CTranslate2 can be installed with pip:

pip install ctranslate2
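
A quick sanity check that the package is importable (the module exposes its version as ctranslate2.__version__):

python -c "import ctranslate2; print(ctranslate2.__version__)"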

The Python module is used to convert models and can translate or generate text in just a few lines of code:

import ctranslate2

# Translate a batch of pre-tokenized source sentences.
translator = ctranslate2.Translator(translation_model_path)
translator.translate_batch(tokens)

# Generate text following a batch of start tokens.
generator = ctranslate2.Generator(generation_model_path)
generator.generate_batch(start_tokens)
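
Since the API operates on tokens, a typical pipeline pairs CTranslate2 with a subword tokenizer. A sketch assuming a SentencePiece model, where both file paths are placeholders rather than files shipped with the library:

import ctranslate2
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="sentencepiece.model")
translator = ctranslate2.Translator("ende_ct2_model")

# Encode the source text into subword tokens...
tokens = sp.encode("Hello world!", out_type=str)
results = translator.translate_batch([tokens])

# ...and decode the best hypothesis back into text.
print(sp.decode(results[0].hypotheses[0]))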

See the documentation for more information and examples.

Benchmarks

We translate the En->De test set newstest2014 with three model families: an OpenNMT-tf WMT14 model, an OpenNMT-py WMT14 model, and an OPUS-MT model.

The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.

Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.

CPU

| | Tokens per second | Max. memory | BLEU |
| --- | --- | --- | --- |
| OpenNMT-tf WMT14 model | | | |
| OpenNMT-tf 2.31.0 (with TensorFlow 2.11.0) | 209.2 | 2653MB | 26.93 |
| OpenNMT-py WMT14 model | | | |
| OpenNMT-py 3.0.4 (with PyTorch 1.13.1) | 275.8 | 2012MB | 26.77 |
| - int8 | 323.3 | 1359MB | 26.72 |
| CTranslate2 3.6.0 | 658.8 | 849MB | 26.77 |
| - int16 | 733.0 | 672MB | 26.82 |
| - int8 | 860.2 | 529MB | 26.78 |
| - int8 + vmap | 1126.2 | 598MB | 26.64 |
| OPUS-MT model | | | |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 | 344.5 | 7605MB | 27.93 |
| - int16 | 330.2 | 5901MB | 27.65 |
| - int8 | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 | 525.0 | 721MB | 27.92 |
| - int16 | 596.1 | 660MB | 27.53 |
| - int8 | 696.1 | 516MB | 27.65 |

Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.
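
For reference, the "int8 + vmap" rows combine 8-bit quantization with a vocabulary map that restricts the candidate target vocabulary during decoding. Assuming the converted model directory contains a vmap file (the model path is a placeholder), it is enabled at translation time:

translator = ctranslate2.Translator("ende_ct2_model", compute_type="int8")
translator.translate_batch(tokens, use_vmap=True)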

GPU

| | Tokens per second | Max. GPU memory | Max. CPU memory | BLEU |
| --- | --- | --- | --- | --- |
| OpenNMT-tf WMT14 model | | | | |
| OpenNMT-tf 2.31.0 (with TensorFlow 2.11.0) | 1483.5 | 3031MB | 3122MB | 26.94 |
| OpenNMT-py WMT14 model | | | | |
| OpenNMT-py 3.0.4 (with PyTorch 1.13.1) | 1795.2 | 2973MB | 3099MB | 26.77 |
| FasterTransformer 5.3 | 6979.0 | 2402MB | 1131MB | 26.77 |
| - float16 | 8592.5 | 1360MB | 1135MB | 26.80 |
| CTranslate2 3.6.0 | 6634.7 | 1261MB | 953MB | 26.77 |
| - int8 | 8567.2 | 1005MB | 807MB | 26.85 |
| - float16 | 10990.7 | 941MB | 807MB | 26.77 |
| - int8 + float16 | 8725.4 | 813MB | 800MB | 26.83 |
| OPUS-MT model | | | | |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 | 3241.0 | 3381MB | 2156MB | 27.92 |
| - float16 | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 | 5876.4 | 1197MB | 754MB | 27.92 |
| - int8 | 7521.9 | 1005MB | 792MB | 27.79 |
| - float16 | 9296.7 | 909MB | 814MB | 27.90 |
| - int8 + float16 | 8362.7 | 813MB | 766MB | 27.90 |

Executed with CUDA 11 on a g5.xlarge Amazon EC2 instance equipped with an NVIDIA A10G GPU (driver version: 510.47.03).
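
The quantized GPU rows correspond to compute types selected at load time. For example, the "int8 + float16" configuration can be requested as follows (the model path is a placeholder):

translator = ctranslate2.Translator("ende_ct2_model", device="cuda", compute_type="int8_float16")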

Additional resources