Hi! When I try to replace the regular attention calculation with context-parallel DotProductAttention, I find that the DotProductAttention results depend on the random seed, so the outputs do not match the original attention exactly. How can I resolve this?
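For reference, here is a minimal sketch of the kind of comparison I mean, assuming the `transformer_engine.pytorch.DotProductAttention` API. The CP group setup (`cp_group`, `cp_ranks`, the CUDA stream) and all tensor sizes are placeholders for illustration, not my actual configuration:

```python
import torch
import transformer_engine.pytorch as te

def seed_everything(seed: int = 1234) -> None:
    # Use identical seeds on every rank so any remaining randomness
    # (e.g. dropout masks) is at least reproducible run to run.
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

seed_everything()

# attention_dropout=0.0 removes the seed-dependent part of the forward pass,
# which is what we want when checking numerical alignment against a reference.
attn = te.DotProductAttention(
    num_attention_heads=16,
    kv_channels=64,
    attention_dropout=0.0,
    attn_mask_type="causal",
).cuda()

# For the context-parallel run, the module is handed the CP process group.
# `cp_group`, `cp_ranks`, and the stream are placeholders; create them with
# torch.distributed for your own topology.
# cp_group = torch.distributed.new_group(ranks=cp_ranks)
# attn.set_context_parallel_group(cp_group, cp_ranks, torch.cuda.Stream())

# q, k, v in the default "sbhd" layout: [seq, batch, heads, head_dim].
q = torch.randn(128, 2, 16, 64, device="cuda", dtype=torch.bfloat16)
k = torch.randn(128, 2, 16, 64, device="cuda", dtype=torch.bfloat16)
v = torch.randn(128, 2, 16, 64, device="cuda", dtype=torch.bfloat16)

with torch.no_grad():
    out = attn(q, k, v)

# Compare against the non-CP reference with a tolerance rather than exact
# equality: different kernel and reduction orders can still give small
# floating-point differences even after all randomness is removed.
# torch.testing.assert_close(out, out_ref, atol=1e-2, rtol=1e-2)
```

Seeding every rank identically and disabling dropout is meant to isolate whether the remaining mismatch comes from genuine numerics (kernel and reduction order under context parallelism) rather than from RNG, but the outputs still do not line up exactly in my runs.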