Open tejasnagendra opened 7 months ago
Created a new class called `MultiBlock`, which wraps around multiple `Block`s to reduce the number of NCCL communications. The number of blocks to combine can be controlled with the `num_blocks_to_combine` parameter in GPT/Llama. Ideally the message size should be around 1 GB to get the best performance, but when models run across multiple nodes, every layer is split into very small chunks, making each message extremely small and resulting in suboptimal use of network bandwidth. This parameter can be tuned to make sure we send larger messages.
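
The mechanics might look roughly like the sketch below. The `MultiBlock` class and `num_blocks_to_combine` parameter come from this PR; the simplified `Block` stand-in and the `build_multi_blocks` helper are hypothetical, shown only to illustrate the grouping:

```python
class Block:
    """Stand-in for a single transformer block."""

    def __init__(self, layer_id: int):
        self.layer_id = layer_id

    def __call__(self, x):
        # A real block would apply attention/MLP; identity keeps the sketch runnable.
        return x


class MultiBlock:
    """Wraps several consecutive Blocks so one NCCL collective can serve
    the whole group instead of one collective per block."""

    def __init__(self, blocks: list):
        self.blocks = blocks

    def __call__(self, x):
        # Hypothetical: communicate the group's weights in a single
        # collective here, then run the wrapped blocks back to back.
        for block in self.blocks:
            x = block(x)
        return x


def build_multi_blocks(num_layers: int, num_blocks_to_combine: int) -> list:
    """Group num_layers Blocks into MultiBlocks of num_blocks_to_combine
    each (the last group may be smaller if the sizes don't divide evenly)."""
    blocks = [Block(i) for i in range(num_layers)]
    return [
        MultiBlock(blocks[i:i + num_blocks_to_combine])
        for i in range(0, num_layers, num_blocks_to_combine)
    ]


# In this sketch, a 32-layer model with num_blocks_to_combine=4 issues
# 8 grouped collectives instead of 32, making each message roughly 4x larger.
stages = build_multi_blocks(num_layers=32, num_blocks_to_combine=4)
assert len(stages) == 8
```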