NVIDIA / TransformerEngine

A library for accelerating Transformer models on NVIDIA GPUs, including support for 8-bit floating point (FP8) precision on Hopper and Ada GPUs, providing better performance with lower memory utilization in both training and inference.
https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html
Apache License 2.0

[PyTorch] Make sure RoPE frequencies are in FP32 #875

Closed. timmoon10 closed this 1 month ago.

timmoon10 commented 1 month ago

Description

Our RoPE kernels only support FP32 RoPE tensors, which has resulted in runtime errors when BF16 RoPE tensors are provided. This PR just casts the RoPE tensors to FP32 as needed. If these casts add too much overhead, a more robust solution would be to modify the kernels to handle more dtypes. In that case, we should reimplement the kernels with NVRTC to avoid compiling a bunch of extra kernels.
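For illustration, here is a minimal sketch of the kind of cast this PR describes, in PyTorch. The helper name `_ensure_fp32_freqs` and the surrounding usage are assumptions for this example, not the actual TransformerEngine code; the point is simply that frequency tensors arriving as BF16 are upcast to FP32 before the fused RoPE kernel runs.

```python
import torch

def _ensure_fp32_freqs(freqs: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: cast RoPE frequencies to FP32 if needed.

    The fused RoPE kernel only supports FP32 frequency tensors, so a
    BF16 (or FP16) tensor is upcast before the kernel is launched.
    """
    if freqs.dtype != torch.float32:
        freqs = freqs.to(torch.float32)
    return freqs

# Example: frequencies built in BF16 (e.g. under autocast) are upcast
# before being consumed by the RoPE kernel.
freqs = torch.randn(128, 1, 1, 64, dtype=torch.bfloat16)
freqs = _ensure_fp32_freqs(freqs)
assert freqs.dtype == torch.float32
```

The cast is a no-op when the input is already FP32, so it only adds overhead on the BF16 path that previously raised a runtime error.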

Type of change

Changes

Checklist:

timmoon10 commented 1 month ago

/te-ci pytorch