facebookresearch / dlrm

An implementation of a deep learning recommendation model (DLRM)
MIT License

torchrec: Super slow allreduce in multi-node multi-gpu setting #356

Closed MrAta closed 11 months ago

MrAta commented 1 year ago

I am using the torchrec version of DLRM. With a single node and 8 A100 GPUs the training speed is roughly 18 it/s; however, when using 2 nodes (each with 8 A100 GPUs again), the training speed drops to about 3 it/s. The 2 nodes are connected through a 100 Gb/s NIC.

Upon further investigation using the PyTorch profiler, it turns out the slowness comes from the allreduce during sparse_data_dist:

Using a single node, sparse_data_dist takes 28 ms: (profiler screenshot)

Using two nodes with TREE allreduce, sparse_data_dist takes 246 ms: (profiler screenshot)

Using two nodes with RING allreduce, sparse_data_dist takes 254 ms: (profiler screenshot)

I wonder if there are any best practices for optimizing the allreduce time with torchrec? As an example knob, Horovod allows treating sparse tensors as dense tensors during allreduce, which reduces the number of NCCL collective calls compared to allreducing sparse tensors.
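For reference, a rough sketch of the Horovod knob I mean (not TorchRec code; the embedding module here is just a placeholder, and the setup assumes a standard hvd.DistributedOptimizer):

```python
# Rough sketch of the Horovod knob mentioned above (not TorchRec code).
# sparse_as_dense=True converts sparse gradients to dense before allreduce,
# trading extra bandwidth for fewer collective calls.
import horovod.torch as hvd
import torch

hvd.init()
torch.cuda.set_device(hvd.local_rank())

model = torch.nn.EmbeddingBag(100_000, 64, sparse=True).cuda()  # placeholder module
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
optimizer = hvd.DistributedOptimizer(
    optimizer,
    named_parameters=model.named_parameters(),
    sparse_as_dense=True,
)
```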

samiwilf commented 12 months ago

The slowdown you report seems reasonable considering that 8 A100 GPUs on a single computer are connected via NVLink or the PCIe bus, and both have considerably more bandwidth than the 100 Gb/s NIC you used between the 2 nodes. One note I would like to add is that training throughput is calculated as: average it/s × local batch size × number of GPUs across all nodes. Since the two-node run has twice as many GPUs, your relative throughput is not 3/18 but 6/18.
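To make that concrete, a quick back-of-the-envelope calculation (assuming the local batch size stayed the same in both runs):

```python
# Back-of-the-envelope throughput comparison (assumes equal local batch size).
local_batch = 1                      # arbitrary unit; it cancels out in the ratio
single_node = 18 * local_batch * 8   # 18 it/s on 1 node x 8 GPUs
two_nodes = 3 * local_batch * 16     # 3 it/s on 2 nodes x 8 GPUs

print(two_nodes / single_node)       # 0.333... i.e. 6/18 of the single-node throughput
```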

I think you may be confusing allreduce and Torchrec's KJTAllToAll. Allreduce would be used in the context of averaging gradients where there is model replication (data parallelism). It's not used for model-parallel parts such as sharded embedding tables, because there are no replicated trainable parameters whose gradients need to be averaged and kept identical across replicas.
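A rough sketch of how the two collectives map onto the hybrid-parallel layout, using plain torch.distributed primitives rather than TorchRec's actual internals (the function names are only illustrative):

```python
# Illustrative only: how the two collectives map onto DLRM's hybrid parallelism,
# expressed with plain torch.distributed primitives (not TorchRec internals).
import torch
import torch.distributed as dist

def average_dense_grads(grad: torch.Tensor) -> None:
    # Data-parallel MLPs: every rank holds a replica, so gradients are summed
    # with allreduce and averaged (this is what DDP does under the hood).
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)
    grad /= dist.get_world_size()

def exchange_sparse_features(per_rank_inputs):
    # Model-parallel embeddings: each rank owns a shard of the tables, so
    # lookup ids / pooled embeddings are exchanged with all-to-all, not allreduce.
    outputs = [torch.empty_like(t) for t in per_rank_inputs]
    dist.all_to_all(outputs, per_rank_inputs)
    return outputs
```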

Torchrec uses a sharding planner to pick the sharding plan it considers best. Regarding NCCL algorithms like Ring and Tree, the NCCL repo would be a better place to ask or read about the tradeoffs of different settings. I think prior issues may cover the topic, such as this issue.
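For what it's worth, the NCCL algorithm can be forced per run through environment variables; a minimal sketch (NCCL_ALGO and NCCL_DEBUG are standard NCCL settings, not TorchRec knobs):

```python
# Minimal sketch: forcing the NCCL algorithm for a run. NCCL_ALGO / NCCL_DEBUG
# are standard NCCL environment variables and must be set before the process
# group is initialized so NCCL picks them up.
import os
import torch.distributed as dist

os.environ["NCCL_ALGO"] = "Tree"    # or "Ring"
os.environ["NCCL_DEBUG"] = "INFO"   # logs which algorithm/protocol NCCL selects

dist.init_process_group(backend="nccl")
```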

MrAta commented 11 months ago

Hi @samiwilf, thanks for the insights.

The reason I mentioned allreduce is that when you zoom into the profile under the sparse_data_dist name scope, there are a bunch of AllReduce calls:

(profiler screenshot)

Some Send/Recvs can be seen as well, but I was wondering what those allreduce calls are?

Another question I had regarding sparse_data_dist: based on the code, this part appears to be the feature distribution among ranks for the next step, which is supposed to be pipelined. However, from the profiling results, it seems to happen right between the forward pass and the backward pass of the current step, which makes things look sequential rather than pipelined: (profiler screenshot)

Am I missing something here? I would appreciate any insight on how to map the profile to the pipeline code.
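For context, a trace like the ones above can be captured with torch.profiler; this is only a minimal sketch, with `pipeline` and `train_loader` as placeholders for the actual TorchRec train pipeline and dataloader:

```python
# Minimal sketch of how the traces above can be captured. `pipeline` and
# `train_loader` are placeholders for the actual TorchRec train pipeline
# (e.g. TrainPipelineSparseDist) and dataloader.
import torch
from torch.profiler import ProfilerActivity, profile, schedule

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=schedule(wait=1, warmup=1, active=3),
    on_trace_ready=torch.profiler.tensorboard_trace_handler("./trace"),
) as prof:
    data_iter = iter(train_loader)
    for _ in range(10):
        pipeline.progress(data_iter)  # one training iteration
        prof.step()
```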

samiwilf commented 11 months ago

I would recommend performing an ablation study of the code and profiling iteratively to pinpoint which parts of the model or training loop correspond to the parts of the profile in question. You could try switching to TorchRec's nightly build, since sparse_data_dist has been replaced with start_sparse_data_dist and wait_sparse_data_dist. It may provide a more granular and informative profile. Lastly, although TorchRec is used for the embedding tables, the bottom and top MLPs are still ordinary PyTorch DDP modules. They use allreduce for gradient averaging.
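As an illustration of the kind of annotation that helps, here is a hedged sketch that labels each training-loop phase with torch.profiler.record_function so regions in the trace map back to specific code; `model`, `optimizer`, and `train_loader` are placeholders, and the real TorchRec train pipeline overlaps the sparse data dist with compute rather than running it inline like this:

```python
# Illustrative sketch: labeling training-loop phases with record_function so
# regions in the trace map back to specific code. `model`, `optimizer`, and
# `train_loader` are placeholders; the real TorchRec train pipeline overlaps
# the sparse data dist with compute instead of running it inline like this.
from torch.profiler import record_function

for batch in train_loader:
    with record_function("## forward ##"):
        loss = model(batch)
    with record_function("## backward ##"):
        loss.backward()            # DDP allreduce of the MLP gradients fires here
    with record_function("## optimizer ##"):
        optimizer.step()
        optimizer.zero_grad()
```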