A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
Description
Our RoPE kernels only support FP32 RoPE tensors, so passing BF16 RoPE tensors results in runtime errors. This PR casts the RoPE tensors to FP32 as needed (see the sketch below). If these casts add too much overhead, a more robust solution would be to extend the kernels to handle more dtypes; in that case, we should reimplement the kernels with NVRTC to avoid compiling many extra kernel instantiations.
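For illustration, a minimal PyTorch-level sketch of the cast-as-needed pattern; the helper name `_fp32_rope_freqs` and the tensor shape are assumptions for the example, not the actual code or API touched by this PR:

```python
import torch


def _fp32_rope_freqs(freqs: torch.Tensor) -> torch.Tensor:
    """Upcast a RoPE frequency tensor to FP32 when it is not FP32 already.

    Hypothetical helper: the name and placement are illustrative of the
    cast-as-needed approach, not the exact code in this change.
    """
    # The fused RoPE kernels only accept FP32 frequency tensors, so BF16
    # (or FP16) inputs are upcast right before the kernel is invoked.
    if freqs.dtype != torch.float32:
        freqs = freqs.float()
    return freqs


# Example: a BF16 RoPE tensor is upcast before reaching the kernel.
freqs_bf16 = torch.randn(4096, 1, 1, 64, dtype=torch.bfloat16)
freqs_fp32 = _fp32_rope_freqs(freqs_bf16)
assert freqs_fp32.dtype == torch.float32
```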
Type of change
[ ] Documentation change (change only to the documentation, either a fix or new content)
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
Changes
Checklist: