Closed: MrAta closed this issue 11 months ago
The slowdown you report seems reasonable, considering that the 8 A100 GPUs within a single node are connected via NVLink or the PCIe bus, both of which have considerably more bandwidth than the 100Gb/s NIC you used between the 2 nodes. One note I would like to add is that training throughput is calculated as: average it/s × local batch size × number of GPUs across all nodes. So relative to the single-node run, your throughput is 6/18 of the original, not 3/18.
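For concreteness, a quick back-of-the-envelope check with the numbers reported in this issue (the per-GPU batch size B cancels out):

```python
# Throughput = average it/s * local batch size * number of GPUs across all nodes.
single_node = 18 * 8    # 18 it/s on 8 GPUs  -> 144 * B samples/s
two_nodes   = 3 * 16    #  3 it/s on 16 GPUs ->  48 * B samples/s
print(two_nodes / single_node)   # ~0.333, i.e. 6/18 of the single-node throughput
```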
I think you may be confusing allreduce with TorchRec's KJTAllToAll. Allreduce is used in the context of averaging gradients where there is model replication (data parallelism). It is not used for model-parallel parts such as sharded embedding tables, because those have no replicated trainable parameters whose gradients would need to be averaged and kept identical across replicas.
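A minimal sketch of the data-parallel side only, assuming a torch.distributed process group is already initialized (this is not TorchRec's internal code, just an illustration of what gradient allreduce does for replicated parameters):

```python
import torch
import torch.distributed as dist

def average_gradients(model: torch.nn.Module) -> None:
    # What DDP effectively does for replicated dense parameters (e.g. the MLPs):
    # sum each gradient across ranks, then divide by world size so every replica
    # ends up with identical, averaged gradients.
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size
```

Sharded embedding tables, by contrast, exchange their input keys and output lookups between ranks (the KJTAllToAll path), so there is no gradient averaging to do for them.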
Torchrec uses a sharding planner to pick the sharding plan it considers best. Regarding NCCL algorithms like Ring and Tree, the NCCL repo would be a better place to ask or read about the tradeoffs of different settings. I think prior issues may cover the topic, such as this issue.
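If you just want to experiment with the algorithm choice, NCCL reads it from environment variables; a sketch, assuming the variables are set before the process group is initialized:

```python
import os

# Must be set before the first NCCL communicator is created
# (i.e. before torch.distributed.init_process_group / the first collective).
os.environ["NCCL_ALGO"] = "Ring"    # or "Tree"
os.environ["NCCL_DEBUG"] = "INFO"   # logs which algorithm/protocol NCCL actually picks
```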
Hi @samiwilf, thanks for the insights.
The reason I mentioned allreduce is that when you zoom into the profile under the sparse_data_dist name scope, there are a bunch of Allreduce calls. Some Send/Recvs can be seen as well, but I was wondering what those allreduce calls are for?
Another question I had regarding sparse_data_dist: based on the code, this part appears to be the initial feature distribution among ranks for the next step, which is supposed to be pipelined. However, from the profiling results it seems to happen right between the forward pass and the backward pass of the current step, which makes things look sequential rather than pipelined. Am I missing something here? I would appreciate any insight on how to map the profile to the pipeline code.
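For reference, this is the pipelined usage pattern I am referring to (simplified sketch based on the TorchRec DLRM example; names may differ across versions):

```python
from torchrec.distributed.train_pipeline import TrainPipelineSparseDist

# progress() is expected to overlap the sparse input distribution for batch i+1
# with the forward/backward of batch i, which is why I expected sparse_data_dist
# to be hidden rather than sitting between forward and backward.
pipeline = TrainPipelineSparseDist(model, optimizer, device)
batches = iter(dataloader)
while True:
    try:
        out = pipeline.progress(batches)  # output for the completed batch (assumption)
    except StopIteration:
        break
```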
I would recommend performing an ablation study of the code and profiling iteratively to pinpoint which parts of the model or training loop correspond to the parts of the profile in question. You could try switching to TorchRec's nightly build, since sparse_data_dist has been replaced with start_sparse_data_dist and wait_sparse_data_dist. It may provide a more granular and informative profile. Lastly, although TorchRec is used for the embedding tables, the bottom and top MLPs are still ordinary PyTorch DDP modules. They use allreduce for gradient averaging.
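A minimal profiling sketch of the kind of iteration I mean, assuming a step() function that runs one training iteration (the names here are illustrative, not TorchRec internals):

```python
import torch
from torch.profiler import ProfilerActivity, profile, record_function

# Wrap one (or a few) training iterations; comment out parts of the model or loop
# between runs and compare which named regions shrink or disappear.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    with record_function("train_step"):
        step()  # hypothetical: whatever runs one iteration in your loop

print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=25))
```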
I am using torchrec version. With a single node and 8 A100 GPUs, the training speed is roughly 18 it/s; however, when using 2 nodes (each again with 8 A100 GPUs), the training speed drops to about 3 it/s. The 2 nodes are connected through a 100Gb/s NIC. Upon further investigation using the PyTorch profiler, it turns out the slowness comes from the allreduce during sparse_data_dist:
- Using a single node, sparse_data_dist takes 28ms.
- Using two nodes with TREE allreduce, sparse_data_dist takes 246ms.
- Using two nodes with RING allreduce, sparse_data_dist takes 254ms.

I wonder if there are any best practices to optimize the allreduce time with torchrec? As an example knob, Horovod allows treating sparse tensors as dense tensors during allreduce, since fewer NCCL calls are needed when the tensors are reduced as dense.
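For reference, the Horovod knob I had in mind looks roughly like this (sketch, from memory of Horovod's PyTorch API):

```python
import horovod.torch as hvd

hvd.init()
# sparse_as_dense converts sparse gradients to dense before the allreduce, so they
# go through the regular (fewer, fused) NCCL allreduce calls instead of the
# sparse/allgather path.
optimizer = hvd.DistributedOptimizer(
    optimizer,
    named_parameters=model.named_parameters(),
    sparse_as_dense=True,
)
```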