NVIDIA / TransformerEngine

A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html
Apache License 2.0

[Question] Why Tensor parallel communication/GEMM overlap can happen only when sequence parallelism is enabled? #746

Open hxdtest opened 6 months ago

hxdtest commented 6 months ago

In Megatron-LM, I found this check tying tp_comm_overlap to sequence_parallel:

if args.tp_comm_overlap:
    assert args.sequence_parallel == True, 'Tensor parallel communication/GEMM overlap can happen only when sequence parallelism is enabled'

But why is that?

ptrendx commented 5 months ago

That is because we currently only support overlapping GEMM with AllGather/ReduceScatter, and those are the collectives used when sequence parallelism is enabled (as opposed to AllReduce, which is used in the other cases).
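
To make the distinction concrete, here is a minimal sketch of a row-parallel linear layer in both regimes. This is illustrative torch.distributed code only (it assumes an already-initialized tensor-parallel process group and a sequence length divisible by `num_chunks` and `tp_size`), not TransformerEngine's actual overlap implementation: with plain tensor parallelism the final AllReduce depends on the whole GEMM output, while with sequence parallelism the ReduceScatter decomposes per sequence chunk and can run while the next chunk's GEMM executes.

```python
# Illustrative sketch only; assumes dist.init_process_group() has been called
# and that x is the full-sequence activation on every rank.
import torch
import torch.distributed as dist


def row_parallel_linear_tp_only(x, w_shard):
    """Plain tensor parallelism: each rank computes a partial output with its
    weight shard, then one AllReduce sums the partials. The AllReduce needs
    the *entire* GEMM result, so there is nothing left to overlap it with."""
    partial = x @ w_shard                       # full-size partial output
    dist.all_reduce(partial, op=dist.ReduceOp.SUM)
    return partial


def row_parallel_linear_sp(x, w_shard, tp_size, num_chunks=4):
    """Sequence parallelism: the summed output is ReduceScattered along the
    sequence dimension instead of AllReduced. Each sequence chunk's reduction
    is independent, so the GEMM can be split into chunks and one chunk's
    ReduceScatter can be in flight while the next chunk's GEMM runs."""
    outputs, handles = [], []
    for xc in x.chunk(num_chunks, dim=0):       # split along sequence dim
        partial = xc @ w_shard                  # GEMM for this sequence chunk
        out = torch.empty(partial.shape[0] // tp_size, partial.shape[1],
                          device=partial.device, dtype=partial.dtype)
        # Launch this chunk's ReduceScatter asynchronously; the next chunk's
        # GEMM is issued while the collective is still in flight.
        handles.append(dist.reduce_scatter_tensor(
            out, partial, op=dist.ReduceOp.SUM, async_op=True))
        outputs.append(out)
    for h in handles:
        h.wait()
    return torch.cat(outputs, dim=0)            # sequence-sharded output
```

TransformerEngine's real overlap path is implemented at a much finer granularity than this sketch, but the structural point is the same: AllGather/ReduceScatter can be pipelined chunk by chunk against the GEMM, whereas the single AllReduce used without sequence parallelism cannot.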