NVIDIA / TransformerEngine

A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html
Apache License 2.0

[JAX] Added unit tests for distributed LayernormMLP #878

Closed phu0ngng closed 3 weeks ago

phu0ngng commented 1 month ago

This PR adds unit tests for the distributed `fused_layernorm_fp8_mlp` and `LayerNormMLP`. As a correctness check, the outputs of a run on multiple GPUs are compared against those of a single-GPU run.
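The comparison pattern the tests rely on can be sketched without any GPUs: compute a single-device reference, emulate tensor parallelism by sharding the MLP weights, reduce the partial results, and check numerical agreement. The sketch below uses NumPy and a simplified LayerNorm+MLP (ReLU, no learnable scale/bias); the function and variable names are illustrative, not the actual TransformerEngine test code.

```python
import numpy as np

def layernorm(x, eps=1e-6):
    # LayerNorm over the last axis (no learnable scale/bias, for brevity).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def layernorm_mlp(x, w1, w2):
    # Simplified LayerNorm -> 2-layer MLP (ReLU activation).
    h = np.maximum(layernorm(x) @ w1, 0.0)
    return h @ w2

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8)).astype(np.float32)
w1 = rng.normal(size=(8, 16)).astype(np.float32)
w2 = rng.normal(size=(16, 8)).astype(np.float32)

# Single-device reference output.
ref = layernorm_mlp(x, w1, w2)

# Emulated tensor parallelism over 2 "devices": split w1 by columns and
# w2 by rows; each shard computes a partial output, and the partials are
# summed, mimicking an all-reduce across devices.
n_shards = 2
partials = []
for i in range(n_shards):
    w1_shard = np.split(w1, n_shards, axis=1)[i]
    w2_shard = np.split(w2, n_shards, axis=0)[i]
    h = np.maximum(layernorm(x) @ w1_shard, 0.0)
    partials.append(h @ w2_shard)
dist = np.sum(partials, axis=0)

# Correctness check: distributed result matches the single-device one.
assert np.allclose(ref, dist, atol=1e-5)
```

Column-splitting the first weight matrix works here because ReLU is elementwise, so each shard's activations match the corresponding columns of the full hidden layer; the row-split second matmul then makes the shard outputs sum to the reference.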

Type of change

Checklist:

phu0ngng commented 1 month ago

/te-ci jax