ELS-RD / kernl

http://www.kernl.ai
Apache License 2.0




Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.

(Benchmark chart: benchmarks run on an RTX 3090.)

Kernl is the first OSS inference engine written in OpenAI Triton (rather than CUDA/C++), a new language designed by OpenAI to make it easier to write GPU kernels.
Each kernel is less than 200 lines of code, and is easy to understand and modify.
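
To give a feel for that level of abstraction, here is the classic element-wise add kernel from the OpenAI Triton tutorials (a minimal sketch, not one of Kernl's kernels):

import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # each program instance handles one BLOCK_SIZE-wide slice of the vectors
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)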

Tutorials - End to End Use Cases

The examples below show how to use Kernl with PyTorch.

Topic | Notebook
Tiled matmul: matrix multiplication implementation in CUDA style | link
Matmul offsets: detailed explanations of a performance trick used in the Triton matmul implementation | link
Online softmax: parallelized softmax computation, a key ingredient of Flash Attention (sketched below) | link
Flash Attention: attention computation without saving the attention matrix to global memory | link
XNLI classification: classification with / without optimizations (Roberta + XNLI classification task) | link
Text generation: with / without optimizations (T5) | link
Transcription generation: with / without optimizations (Whisper) | link
Llama 2 optimization by kernel fusion | link
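
As a taste of what the online softmax notebook covers, here is a minimal sketch of the algorithm in plain PyTorch (illustrative only; the notebook implements it as a Triton kernel). It processes the last dimension block by block while keeping a running max and normalizer, which is what lets Flash Attention avoid materializing the full attention matrix:

import torch

def online_softmax(x: torch.Tensor, block: int = 4) -> torch.Tensor:
    """Softmax over the last dim computed block by block with a running max and sum."""
    m = torch.full(x.shape[:-1], float("-inf"))   # running max
    d = torch.zeros(x.shape[:-1])                 # running normalizer
    for start in range(0, x.shape[-1], block):
        chunk = x[..., start:start + block]
        new_m = torch.maximum(m, chunk.max(dim=-1).values)
        # rescale the old normalizer, then add the new block's contribution
        d = d * torch.exp(m - new_m) + torch.exp(chunk - new_m.unsqueeze(-1)).sum(dim=-1)
        m = new_m
    return torch.exp(x - m.unsqueeze(-1)) / d.unsqueeze(-1)

x = torch.randn(2, 10)
assert torch.allclose(online_softmax(x), torch.softmax(x, dim=-1), atol=1e-6)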

Installation

IMPORTANT: this package requires PyTorch to be installed.
Please install it first.

pip install 'git+https://github.com/ELS-RD/kernl'
# or for local dev, after git clone ...
pip install -e .

This project requires Python >= 3.9. Furthermore, the library requires an Ampere GPU and CUDA to be installed.
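
If you are not sure whether your GPU qualifies, a quick sanity check (a sketch, assuming a CUDA-enabled PyTorch install) is:

import torch

# Ampere and newer GPUs report compute capability 8.0 or higher
major, minor = torch.cuda.get_device_capability()
assert (major, minor) >= (8, 0), "Kernl requires an Ampere or newer GPU"
print(torch.cuda.get_device_name(), "| CUDA", torch.version.cuda)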

If you prefer Docker:

# build
DOCKER_BUILDKIT=1 docker build -t kernl .
# run
docker run --rm -it --gpus all -v $(pwd):/kernl kernl

Getting started

import torch
from transformers import AutoModel
from kernl.model_optimization import optimize_model

model = AutoModel.from_pretrained("model_name").eval().cuda()
optimize_model(model)

inputs = ...

with torch.inference_mode(), torch.cuda.amp.autocast():
    outputs = model(**inputs)
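
In practice, inputs come from the model's tokenizer. A minimal end-to-end sketch (the model name and sentence are illustrative):

import torch
from transformers import AutoModel, AutoTokenizer
from kernl.model_optimization import optimize_model

model_name = "bert-base-uncased"  # illustrative; any supported encoder works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval().cuda()
optimize_model(model)

inputs = tokenizer(["Kernl makes transformer inference faster."], return_tensors="pt").to("cuda")

with torch.inference_mode(), torch.cuda.amp.autocast():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)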

For end-to-end use cases, you may want to check the tutorial notebooks listed above (XNLI classification, text generation, transcription generation).

Test and Benchmark

Conventions

Run tests and benchmarks

# tada!
pytest

There are over 2K benchmarks, and they take a while to run.

Some rules on how pytest handles benchmarks:

WARNING: grouping with param:X will make pytest crash if X is not a parameter of at least one of the functions being run.

Some useful commands:

# only benchmarks
pytest -k benchmark
# no benchmarks
pytest -k "not benchmark"
# only the linear layer benchmarks, grouped by shape and by whether the input is contiguous
pytest test/test_linear_layer.py --benchmark-group-by fullfunc,param:shape,param:contiguous

Create new patterns to replace fx graph nodes

The first step in replacing function/module calls in the graph is to create the pattern that will be replaced. The easiest way to do this is to convert the model to an fx graph and then print it, either with utils.graph_report or by printing its code: print(your_graph_module.code).

Then you can use replace_pattern to replace the pattern in the graph. We have our own version of replace_pattern with some enhancements, for example to work with modules. You can find examples of that in the optimizer folder.
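
As an illustration, here is what this workflow looks like with vanilla torch.fx on a toy pattern (a sketch only; Kernl's enhanced replace_pattern and the real fused-kernel replacements live in the optimizer folder):

import torch
import torch.fx

class TinyModel(torch.nn.Module):
    def forward(self, x, y):
        return torch.relu(x + y)

# 1) convert the model to an fx graph and inspect the generated code
gm = torch.fx.symbolic_trace(TinyModel())
print(gm.code)

# 2) describe the subgraph to match and what to replace it with
def pattern(x, y):
    return torch.relu(x + y)

def replacement(x, y):
    # Kernl would call a fused Triton kernel here instead
    return torch.clamp(x + y, min=0.0)

# 3) rewrite the graph in place and use the module as usual
torch.fx.replace_pattern(gm, pattern, replacement)
gm.recompile()
print(gm.code)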

Code Formatting

We use black / isort / flake8 to format the code. You can run them with:

make source_code_format
make source_code_check_format

Why?

At Lefebvre Sarrut, we run several transformers in production, some of them being latency sensitive (search and recsys mostly).

We use OnnxRuntime and TensorRT, and we even created transformer-deploy, an OSS library, to share our knowledge with the community.
Recently, we were testing generative language models and tried to accelerate them. It proved very difficult with traditional tools.

Basically, and to make it short, it seems to us that ONNX (the main format used to feed those tools) is an interesting format with wide hardware support.

However, its ecosystem (mostly its inference engines) has several limitations when dealing with new LLM architectures.

One very annoying thing is that new models are never accelerated out of the box: you need to wait for someone to write custom CUDA kernels for them.

That's not to say these solutions are bad: one big strength of OnnxRuntime is its multi-hardware support.
As for TensorRT, it's really fast.

So we wanted something as fast as TensorRT but in Python / PyTorch; that's why we built Kernl.

How?

The simple rule is that memory bandwidth is often the bottleneck in deep learning, so reducing memory accesses is usually a good strategy to accelerate inference. On short input sequences, the bottleneck is often CPU overhead, which has to be removed too. Counterintuitively, making things faster does not require making the computation itself faster.

We mostly leverage three technologies: OpenAI Triton to write fused GPU kernels, CUDA graphs to remove CPU overhead, and TorchDynamo to capture and rewrite the PyTorch graph.
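
To make the CPU-overhead point concrete, here is a minimal CUDA graphs sketch in plain PyTorch (illustrative of the idea, not Kernl's internals): the whole forward pass is captured once and then replayed with a single launch.

import torch

model = torch.nn.Linear(512, 512).cuda().eval()
static_input = torch.randn(8, 512, device="cuda")

# warm-up on a side stream, as required before graph capture
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(s)

# capture the forward pass once...
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_output = model(static_input)

# ...then replay it: one CPU call relaunches all captured kernels
static_input.copy_(torch.randn(8, 512, device="cuda"))
g.replay()
print(static_output.shape)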

Acknowledgments

The OpenAI Triton kernels take inspiration from examples in the OpenAI Triton tutorials and from the xformers library.

Contributing

If you would like to contribute, for example to code or documentation, please see our contribution guide.

Code of Conduct

Please see our Code of Conduct for any questions about the community we are trying to build and what to do if you need help with someone who is acting unprofessionally.