CTranslate2

CTranslate2 is a fast and full-featured inference engine for Transformer models. It aims to provide comprehensive inference features and be the most efficient and cost-effective solution to deploy standard neural machine translation systems on CPU and GPU. It currently supports Transformer models trained with OpenNMT-py, OpenNMT-tf, and Fairseq.

The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.

Table of contents

  1. Key features
  2. Quickstart
  3. Installation
  4. Converting models
  5. Translating
  6. Environment variables
  7. Building
  8. Testing
  9. Benchmarks
  10. Frequently asked questions

Key features

  • Fast and efficient execution on CPU and GPU
    The execution is significantly faster and requires fewer resources than general-purpose deep learning frameworks on supported models and tasks.
  • Quantization and reduced precision
    The model serialization and computation support weights with reduced precision: 16-bit floating points (FP16), 16-bit integers, and 8-bit integers.
  • Multiple CPU architectures support
    The project supports x86-64 and ARM64 processors and integrates multiple backends that are optimized for these platforms: Intel MKL, oneDNN, OpenBLAS, Ruy, and Apple Accelerate.
  • Automatic CPU detection and code dispatch
    One binary can include multiple backends (e.g. Intel MKL and oneDNN) and instruction set architectures (e.g. AVX, AVX2) that are automatically selected at runtime based on the CPU information.
  • Parallel and asynchronous translations
    Translations can be run efficiently in parallel and asynchronously using multiple GPUs or CPU cores.
  • Dynamic memory usage
    The memory usage changes dynamically depending on the request size while still meeting performance requirements thanks to caching allocators on both CPU and GPU.
  • Lightweight on disk
    Models can be quantized below 100MB with minimal accuracy loss. A full-featured Docker image supporting GPU and CPU requires less than 500MB (with CUDA 10.0).
  • Simple integration
    The project has few dependencies and exposes translation APIs in Python and C++ to cover most integration needs.
  • Interactive decoding
    Advanced decoding features allow autocompleting a partial translation and returning alternatives at a specific location in the translation.

Some of these features are difficult to achieve with standard deep learning frameworks and are the motivation for this project.

Supported decoding options

The translation API supports several decoding options:

  • decoding with greedy or beam search
  • random sampling from the output distribution
  • translating with a known target prefix
  • returning alternatives at a specific location in the target
  • constraining the decoding length
  • returning multiple translation hypotheses
  • returning attention vectors
  • approximating the generation using a pre-compiled vocabulary map
  • replacing unknown target tokens by source tokens with the highest attention
  • biasing translations towards a given prefix
  • scoring existing translations

See the Decoding documentation for examples.
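
For illustration, here is a minimal sketch of a few of these options using the Python API and the model converted in the Quickstart. The option names below are those of the translate_batch method in recent releases; check the Python reference for your version:

import ctranslate2

translator = ctranslate2.Translator("ende_ctranslate2/")
batch = [["▁H", "ello", "▁world", "!"]]

# Beam search returning several scored hypotheses.
translator.translate_batch(batch, beam_size=4, num_hypotheses=4, return_scores=True)

# Random sampling from the output distribution.
translator.translate_batch(batch, beam_size=1, sampling_topk=10, sampling_temperature=0.8)

# Decoding with a known target prefix.
translator.translate_batch(batch, target_prefix=[["▁Hallo"]])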

Quickstart

The steps below assume a Linux OS and a Python installation (3.6 or above).

1. Install the Python package:

pip install --upgrade pip
pip install ctranslate2

2. Convert a trained Transformer model, for example one of the pretrained OpenNMT Transformer models (choose one of the two models):

a. OpenNMT-py

pip install OpenNMT-py

wget https://s3.amazonaws.com/opennmt-models/transformer-ende-wmt-pyOnmt.tar.gz
tar xf transformer-ende-wmt-pyOnmt.tar.gz

ct2-opennmt-py-converter --model_path averaged-10-epoch.pt --output_dir ende_ctranslate2

b. OpenNMT-tf

pip install OpenNMT-tf

wget https://s3.amazonaws.com/opennmt-models/averaged-ende-ckpt500k-v2.tar.gz
tar xf averaged-ende-ckpt500k-v2.tar.gz

ct2-opennmt-tf-converter --model_path averaged-ende-ckpt500k-v2 --output_dir ende_ctranslate2 \
    --src_vocab averaged-ende-ckpt500k-v2/wmtende.vocab \
    --tgt_vocab averaged-ende-ckpt500k-v2/wmtende.vocab \
    --model_type TransformerBase

3. Translate tokenized inputs, for example with the Python API:

import ctranslate2
translator = ctranslate2.Translator("ende_ctranslate2/")
translator.translate_batch([["▁H", "ello", "▁world", "!"]])
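
The call returns one result per input example. Assuming a CTranslate2 2.x release, where translate_batch returns TranslationResult objects, the translated tokens can be read as follows:

results = translator.translate_batch([["▁H", "ello", "▁world", "!"]])
print(results[0].hypotheses[0])  # e.g. ['▁Hallo', '▁Welt', '!']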

Installation

Python package

Python packages are published on PyPI for Linux and macOS:

pip install ctranslate2

To translate on GPU you should install the CUDA 11.x toolkit. The macOS version only supports CPU execution.

Requirements:

  • OS: Linux, macOS
  • Python version: >= 3.6
  • pip version: >= 19.3
  • (optional) CUDA version: 11.x
  • (optional) GPU driver version: >= 450.80.02

Docker images

The opennmt/ctranslate2 repository contains images with prebuilt libraries and clients:

docker pull opennmt/ctranslate2:latest-ubuntu20.04-cuda11.2

The library is installed in /opt/ctranslate2 and a Python package is installed on the system.

Requirements:

  • Docker
  • (optional) GPU driver version: >= 450.80.02

Manual compilation

See Building.

Converting models

The core CTranslate2 implementation is framework-agnostic. The framework-specific logic is moved to a conversion step that serializes trained models into a simple binary format.

The following frameworks and models are currently supported:

  • Frameworks: OpenNMT-tf, OpenNMT-py, Fairseq
  • Models: Transformer (Vaswani et al. 2017), optionally with relative position representations (Shaw et al. 2018)

If you are using a model that is not listed above, consider opening an issue to discuss future integration.

The Python package includes a conversion API and conversion scripts:

  • ct2-opennmt-py-converter
  • ct2-opennmt-tf-converter
  • ct2-fairseq-converter

The conversion should be run in the same environment as the selected training framework.
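
The conversion can also be scripted in Python. A minimal sketch using the OpenNMT-py model from the Quickstart (argument names may differ slightly across versions):

import ctranslate2

converter = ctranslate2.converters.OpenNMTPyConverter("averaged-10-epoch.pt")
converter.convert("ende_ctranslate2", force=True)  # force=True overwrites an existing output directory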

Integrated model conversion

Models can also be converted directly from the supported training frameworks; see their respective documentation.

Quantization and reduced precision

The converters support reducing the weights precision to save on space and possibly accelerate the model execution. See the Quantization documentation.
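
For example, the weights can be quantized when converting the model, and a reduced precision can also be requested when loading it. A minimal sketch using the Python API (see the Quantization documentation for the supported types):

import ctranslate2

# Store the weights as 8-bit integers at conversion time.
converter = ctranslate2.converters.OpenNMTPyConverter("averaged-10-epoch.pt")
converter.convert("ende_ctranslate2_int8", quantization="int8")

# Or select a reduced precision when loading the model.
translator = ctranslate2.Translator("ende_ctranslate2_int8", compute_type="int8")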

Adding converters

Each converter should populate a model specification with trained weights coming from an existing model. The model specification declares the variable names and layout expected by the CTranslate2 core engine.

See the existing converters implementation which could be used as a template.

Translating

The examples use the English-German model converted in the Quickstart. This model requires a SentencePiece tokenization.

With the translation client

echo "▁H ello ▁world !" | docker run --gpus=all -i --rm -v $PWD:/data \
    opennmt/ctranslate2:latest-ubuntu20.04-cuda11.2 --model /data/ende_ctranslate2 --device cuda

See docker run --rm opennmt/ctranslate2:latest-ubuntu20.04-cuda11.2 --help for additional options.

With the Python API

import ctranslate2
translator = ctranslate2.Translator("ende_ctranslate2/")
translator.translate_batch([["▁H", "ello", "▁world", "!"]])

See the Python reference for more advanced usages.

With the C++ API

#include <iostream>
#include <ctranslate2/translator.h>

int main() {
  ctranslate2::Translator translator("ende_ctranslate2/");
  ctranslate2::TranslationResult result = translator.translate({"▁H", "ello", "▁world", "!"});

  for (const auto& token : result.output())
    std::cout << token << ' ';
  std::cout << std::endl;
  return 0;
}

See the Translator class for more advanced usages, and the TranslatorPool class for running translations in parallel and asynchronously.

Environment variables

Some environment variables can be configured to customize the execution:

  • CT2_CUDA_ALLOCATOR: Select the CUDA memory allocator. Possible values are: cub_caching (default), cuda_malloc_async (requires CUDA >= 11.2).
  • CT2_CUDA_ALLOW_FP16: Allow using FP16 computation on GPU even if the device does not have efficient FP16 support.
  • CT2_CUDA_CACHING_ALLOCATOR_CONFIG: Tune the CUDA caching allocator (see Performance).
  • CT2_FORCE_CPU_ISA: Force CTranslate2 to select a specific instruction set architecture (ISA). Possible values are: GENERIC, AVX, AVX2. Note: this does not impact backend libraries (such as Intel MKL) which usually have their own environment variables to configure ISA dispatching.
  • CT2_TRANSLATORS_CORE_OFFSET: If set to a non-negative value, parallel translators are pinned to cores in the range [offset, offset + inter_threads]. Requires compilation with -DOPENMP_RUNTIME=NONE.
  • CT2_USE_EXPERIMENTAL_PACKED_GEMM: Enable the packed GEMM API for Intel MKL (see Performance).
  • CT2_USE_MKL: Force CTranslate2 to use (or not) Intel MKL. By default, the runtime automatically decides whether to use Intel MKL or not based on the CPU vendor.
  • CT2_VERBOSE: Configure the logs verbosity:
    • -3 = off
    • -2 = critical
    • -1 = error
    • 0 = warning (default)
    • 1 = info
    • 2 = debug
    • 3 = trace

When using Python, these variables should be set before importing the ctranslate2 module, e.g.:

import os
os.environ["CT2_VERBOSE"] = "1"

import ctranslate2

Building

Docker images

The Docker images build all translation clients presented in Translating. The build command should be run from the project root directory, e.g.:

docker build -t opennmt/ctranslate2:latest-ubuntu20.04-cuda11.2 -f docker/Dockerfile .

See the docker/ directory for available images.

Build options

The project uses CMake for compilation. The following options can be set with -DOPTION=VALUE:

CMake option | Accepted values | Description
BUILD_CLI | OFF, ON (default: ON) | Compiles the translation clients
BUILD_TESTS | OFF, ON (default: OFF) | Compiles the tests
CMAKE_CXX_FLAGS | compiler flags | Defines additional compiler flags
CUDA_DYNAMIC_LOADING | OFF, ON (default: OFF) | Enables the dynamic loading of CUDA libraries at runtime instead of linking against them. Requires Linux and CUDA >= 11.
ENABLE_CPU_DISPATCH | OFF, ON (default: ON) | Compiles CPU kernels for multiple ISAs and dispatches at runtime (should be disabled when explicitly targeting an architecture with the -march compilation flag)
ENABLE_PROFILING | OFF, ON (default: OFF) | Enables the integrated profiler (usually disabled in production builds)
OPENMP_RUNTIME | INTEL, COMP, NONE (default: INTEL) | Selects or disables the OpenMP runtime (INTEL: Intel OpenMP; COMP: OpenMP runtime provided by the compiler; NONE: no OpenMP runtime)
WITH_CUDA | OFF, ON (default: OFF) | Compiles with the CUDA backend
WITH_DNNL | OFF, ON (default: OFF) | Compiles with the oneDNN backend (a.k.a. DNNL)
WITH_MKL | OFF, ON (default: ON) | Compiles with the Intel MKL backend
WITH_ACCELERATE | OFF, ON (default: OFF) | Compiles with the Apple Accelerate backend
WITH_OPENBLAS | OFF, ON (default: OFF) | Compiles with the OpenBLAS backend
WITH_RUY | OFF, ON (default: OFF) | Compiles with the Ruy backend

Some build options require external dependencies:

  • -DWITH_MKL=ON requires Intel MKL (oneAPI)
  • -DWITH_DNNL=ON requires oneDNN
  • -DWITH_ACCELERATE=ON requires the Apple Accelerate framework (macOS only)
  • -DWITH_OPENBLAS=ON requires OpenBLAS
  • -DWITH_CUDA=ON requires the CUDA Toolkit

Multiple backends can be enabled for a single build. When building with both Intel MKL and oneDNN, the backend will be selected at runtime based on the CPU information.

Example (Ubuntu)

Install Intel MKL (optional for GPU only builds)

Use the following instructions to install Intel MKL:

wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
sudo sh -c 'echo "deb https://apt.repos.intel.com/oneapi all main" > /etc/apt/sources.list.d/oneAPI.list'
sudo apt-get update
sudo apt-get install intel-oneapi-mkl-devel

See the Intel MKL documentation for other installation methods.

Install CUDA (optional for CPU only builds)

See the NVIDIA documentation for information on how to download and install CUDA.

Compile

Under the project root, run the following commands:

git submodule update --init --recursive
mkdir build && cd build
cmake -DWITH_MKL=ON -DWITH_CUDA=ON ..
make -j4

(If you did not install one of Intel MKL or CUDA, set its corresponding flag to OFF in the CMake command line.)

These steps should produce the cli/translate binary. You can try it with the model converted in the Quickstart section:

$ echo "▁H ello ▁world !" | ./cli/translate --model ende_ctranslate2/ --device auto
▁Hallo ▁Welt !

Testing

C++

To enable the tests, you should configure the project with cmake -DBUILD_TESTS=ON. The binary tests/ctranslate2_test runs all tests using Google Test. It expects the path to the test data as an argument:

./tests/ctranslate2_test ../tests/data

Python

# Install the CTranslate2 library.
cd build && make install && cd ..

# Build and install the Python wheel.
cd python
pip install -r install_requirements.txt
python setup.py bdist_wheel
pip install dist/*.whl

# Run the tests with pytest.
pip install -r tests/requirements.txt
pytest tests/test.py

Depending on your build configuration, you might need to set LD_LIBRARY_PATH if missing libraries are reported when running tests/test.py.

Benchmarks

We compare CTranslate2 with OpenNMT-py and OpenNMT-tf on their pretrained English-German Transformer models (available on the website). For this benchmark, CTranslate2 models are using the weights of the OpenNMT-py model.

Model size

Model | Size
OpenNMT-py | 542MB
OpenNMT-tf | 367MB
CTranslate2 | 364MB
- int16 | 187MB
- float16 | 182MB
- int8 | 100MB
- int8 + float16 | 95MB

CTranslate2 models are generally lighter and can go as low as 100MB when quantized to int8. This also results in faster loading and noticeably lower memory usage at runtime.

Results

We translate the test set newstest2014 and report:

  • the number of target tokens generated per second (higher is better)
  • the maximum memory usage (lower is better)
  • the BLEU score of the detokenized output (higher is better)

See the directory tools/benchmark for more details about the benchmark procedure and how to run it. Also see the Performance document to further improve CTranslate2 performance.

Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.

CPU

System | Tokens per second | Max. memory | BLEU
OpenNMT-tf 2.19.0 (with TensorFlow 2.5.0) | 364.1 | 2620MB | 26.93
OpenNMT-py 2.1.2 (with PyTorch 1.9.0) | 472.6 | 1856MB | 26.77
- int8 | 510.4 | 1712MB | 26.80
CTranslate2 2.3.0 | 1182.3 | 1037MB | 26.77
- int16 | 1532.0 | 954MB | 26.83
- int8 | 1785.2 | 810MB | 26.86
- int8 + vmap | 2263.4 | 692MB | 26.70

Executed with 8 threads on a c5.metal Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.

GPU

System | Tokens per second | Max. GPU memory | Max. CPU memory | BLEU
OpenNMT-tf 2.19.0 (with TensorFlow 2.5.0) | 1815.2 | 2660MB | 1724MB | 26.93
OpenNMT-py 2.1.2 (with PyTorch 1.9.0) | 1536.7 | 3046MB | 2987MB | 26.77
CTranslate2 2.3.0 | 3696.7 | 1234MB | 555MB | 26.77
- int8 | 5201.9 | 946MB | 565MB | 26.82
- float16 | 5303.5 | 818MB | 607MB | 26.75
- int8 + float16 | 5824.3 | 722MB | 566MB | 26.88

Executed with CUDA 11 on a g4dn.xlarge Amazon EC2 instance equipped with an NVIDIA T4 GPU (driver version: 460.73.01).

Frequently asked questions

How does it relate to the original CTranslate project?

The original CTranslate project shares a similar goal which is to provide a custom execution engine for OpenNMT models that is lightweight and fast. However, it has some limitations that were hard to overcome:

  • a strong dependency on LuaTorch and OpenNMT-lua, which are now both deprecated in favor of other toolkits;
  • a direct reliance on Eigen, which introduces heavy templating and limited GPU support.

CTranslate2 addresses these issues in several ways:

  • the core implementation is framework-agnostic, moving the framework-specific logic to a model conversion step;
  • the internal operators follow the ONNX specifications as much as possible for better future-proofing;
  • the calls to external libraries (Intel MKL, cuBLAS, etc.) occur as late as possible in the execution so that they do not rely on library-specific logic.

What is the state of this project?

The implementation has been extensively tested in production environments, so you can rely on it in your applications. The project versioning follows Semantic Versioning 2.0.0. The following APIs are covered by backward compatibility guarantees:

  • Converted models
  • Python converters options
  • Python symbols:
    • ctranslate2.Translator
    • ctranslate2.converters.FairseqConverter
    • ctranslate2.converters.OpenNMTPyConverter
    • ctranslate2.converters.OpenNMTTFConverter
  • C++ symbols:
    • ctranslate2::models::Model
    • ctranslate2::TranslationOptions
    • ctranslate2::TranslationResult
    • ctranslate2::Translator
    • ctranslate2::TranslatorPool
  • C++ translation client options

Other APIs are expected to evolve to increase efficiency, genericity, and model support.

Why and when should I use this implementation instead of PyTorch or TensorFlow?

Here are some scenarios where this project could be used:

  • You want to accelerate standard translation models for production usage, especially on CPUs.
  • You need to embed translation models in an existing C++ application without adding large dependencies.
  • Your application requires custom threading and memory usage control.
  • You want to reduce the model size on disk and/or memory.

However, you should probably not use this project when:

  • You want to train custom architectures not covered by this project.
  • You see no value in the key features listed at the top of this document.

What hardware is supported?

CPU

CTranslate2 supports x86-64 and ARM64 processors. It includes optimizations for AVX, AVX2, and NEON and supports multiple BLAS backends that should be selected based on the target platform (see Building).

Prebuilt binaries are designed to run on any x86-64 processors supporting at least SSE 4.2. The binaries implement runtime dispatch to select the best backend and instruction set architecture (ISA) for the platform. In particular, they are compiled with both Intel MKL and oneDNN so that Intel MKL is only used on Intel processors where it performs best, whereas oneDNN is used on other x86-64 processors such as AMD.

GPU

CTranslate2 supports NVIDIA GPUs with a Compute Capability greater or equal to 3.5.

The driver requirement depends on the CUDA version. See the CUDA Compatibility guide for more information.

What are the known limitations?

The current approach only exports the weights from existing models and redefines the computation graph in code. This implies a strong assumption about the graph architecture executed by the original framework.

We could ease this assumption by supporting ONNX graphs for parts of the model.

What are the future plans?

There are many ways to make this project better and even faster. See the open issues for an overview of current and planned features. Here are some things we would like to get to:

  • Support of running ONNX graphs

What is the difference between intra_threads and inter_threads?

  • intra_threads is the number of OpenMP threads used per translation: increase this value to decrease the latency.
  • inter_threads is the maximum number of CPU translations executed in parallel: increase this value to increase the throughput. Even though the model data is shared, this execution mode increases the memory usage as some internal buffers are duplicated for thread safety.

The total number of computing threads launched by the process is summarized by this formula:

num_threads = inter_threads * intra_threads

Note that these options are only defined for CPU translation and are forced to 1 when executing on GPU. Parallel translations on GPU require multiple GPUs. See the option device_index that accepts multiple device IDs.
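
For example, the sketch below (using the Quickstart model) creates 4 parallel translators with 2 threads each, so up to 8 computing threads in total; the max_batch_size option splits the input into sub-batches that can be translated in parallel:

import ctranslate2

translator = ctranslate2.Translator("ende_ctranslate2/", inter_threads=4, intra_threads=2)

batch = [["▁H", "ello", "▁world", "!"]] * 16
translator.translate_batch(batch, max_batch_size=4)  # sub-batches of 4 examples run in parallel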

Do you provide a translation server?

The OpenNMT-py REST server is able to serve CTranslate2 models. See the code integration to learn more.

How do I generate a vocabulary mapping file?

The vocabulary mapping file (a.k.a. vmap) maps source N-grams to a list of target tokens. During translation, the target vocabulary will be dynamically reduced to the union of all target tokens associated with the N-grams from the batch to translate.

It is a text file where each line has the following format:

src_1 src_2 ... src_N<TAB>tgt_1 tgt_2 ... tgt_K

If the source N-gram is empty (N = 0), the associated target tokens will always be included in the reduced vocabulary.

See here for an example of how to generate this file. The file can then be passed to the converter script to be included in the model directory (see the --vocab_mapping option) and used during translation by enabling the use_vmap translation option.
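
As an illustration, the snippet below shows a hypothetical vocabulary mapping file and how it would be enabled from the Python API (the file content is made up for the example):

# Hypothetical content of the vocabulary mapping file (tab-separated):
#
#   ▁H ello<TAB>▁Hallo
#   <TAB>▁Welt ! .
#
# The second entry has an empty source N-gram, so its target tokens are always kept.

import ctranslate2

translator = ctranslate2.Translator("ende_ctranslate2/")
translator.translate_batch([["▁H", "ello", "▁world", "!"]], use_vmap=True)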
