This PR extends JIT compilation to the fused cast-transpose kernels using the CUDA Driver API (NVRTC), which enables performance optimizations not otherwise available at build time, and reduces the size of the TE binary `libtransformer_engine.so` by 4.5% (from 292 MB to 279 MB).
Performance was benchmarked for the `cast_transpose_dbias` and `cast_transpose_dbias_dgelu` kernels with matrix size 2048x12288 and different combinations of input types. The results are provided in the table below:

![JIT-kernels](https://github.com/NVIDIA/TransformerEngine/assets/64355998/ef941b74-5082-4f44-ada4-f140084c2a30)
NOTES:
- Original - fused cast-transpose kernels before commit #884
- Template - current state
- JIT - proposed by this PR
- Benchmarked on an H100 (HBM3)
- Times measured in microseconds
Performance of the kernels remains the same, with only marginal changes (~2-6%) in the runtime of the nvte functions.
Type of change
[ ] Documentation change (change only to the documentation, either a fix or new content)
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
Changes
- Modified the logic in `cast_transpose_fusion.cu`
- Added the corresponding source file to the `transformer_engine/common/transpose/rtc` folder
- Adjusted the CMake configuration to generate a string header from that file