CaRRotOne closed this issue 2 years ago
By default bagua-net is not enabled: https://github.com/BaguaSys/bagua/blob/master/bagua/distributed/run.py#L394
> By default bagua-net is not enabled: https://github.com/BaguaSys/bagua/blob/master/bagua/distributed/run.py#L394

@NOBLES5E I did a test; the throughput is the same whether or not I pass the --enable_bagua_net argument.
Try following https://tutorials.baguasys.com/faq_troubleshooting#nccl-warn-reduce-invalid-reduction-operation-4 to install dependencies, then run with NCCL_DEBUG=info to see whether the bagua-net libnccl-net plugin is correctly found (the actual library should be at ~/.data/bagua-net/libnccl-net.so).
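When checking the NCCL_DEBUG=info output, the line that names the network backend tells you whether the plugin was picked up. A minimal sketch of scanning for it — the "NCCL INFO Using network ..." wording matches what NCCL typically prints, but treat the exact format (and the backend name shown here) as an assumption for your NCCL version:

```python
import re

def nccl_network_backend(log_text):
    """Return the network backend name from NCCL debug logs, or None if absent."""
    m = re.search(r"NCCL INFO Using network (\S+)", log_text)
    return m.group(1) if m else None

# Example log line (format assumed from typical NCCL_DEBUG=info output):
sample = "host:1234:1234 [0] NCCL INFO Using network BaguaNet\n"
print(nccl_network_backend(sample))  # BaguaNet
```

If this prints a plain socket backend instead, NCCL fell back to its default transport and the plugin was not loaded.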
Keep in mind that benchmarks are highly dependent on many factors, including the workload and environment. It seems that in your screenshot you are running the workload on a single machine, where it is expected that bagua-net shows no benefit (on a single machine NCCL will use shared memory/NVLink directly, while bagua-net optimizes NCCL TCP performance).
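A back-of-envelope way to see why the network layer only matters across machines: in a ring all-reduce each worker transfers roughly 2(N-1)/N times the payload size. On a single machine that traffic moves over shared memory/NVLink, so a TCP-level optimization has nothing to speed up. A sketch of the arithmetic (the 100 MB payload is an arbitrary illustrative number):

```python
def ring_allreduce_bytes_per_worker(payload_bytes, workers):
    """Bytes each worker sends in a ring all-reduce: 2*(N-1)/N * payload."""
    return 2 * (workers - 1) / workers * payload_bytes

# 100 MiB of gradients across 8 GPUs:
sent = ring_allreduce_bytes_per_worker(100 * 2**20, 8)
print(sent / 2**20)  # 175.0 (MiB per worker)
```

Only when those bytes cross a NIC does the TCP path (and hence bagua-net) enter the picture.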
@NOBLES5E Thanks for the reminder. So bagua-net only optimizes NCCL TCP performance. As for RDMA, will you do some performance optimization for it in the future?
There's no transport level optimization for RDMA supported yet.
@NOBLES5E I also tried using google/nccl-fastsocket and bagua-net for an all-reduce performance test on 2 nodes with 4 GPUs each, but the result is still the same as before; nothing changed. You just mentioned that benchmarks are highly dependent on many factors, including the workload and environment, so could you please list some important factors that affect the results? Here is my env:
The same question was answered in https://github.com/google/nccl-fastsocket/issues/2
It seems that your 10Gbps NIC is already saturated. In this case, the compression algorithms in bagua (for example https://tutorials.baguasys.com/algorithms/bytegrad) may help you better.
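A rough sketch of why compression helps once the NIC is the bottleneck: if fp32 gradients are quantized to 1 byte per element (the 4x payload reduction here is an assumption for illustration; see the linked ByteGrad tutorial for the actual algorithm), the transfer time on the same link shrinks proportionally. The 100M-parameter model is an arbitrary example:

```python
def transfer_seconds(payload_bytes, nic_gbps):
    """Lower bound on wire time: payload in bits divided by link rate."""
    return payload_bytes * 8 / (nic_gbps * 1e9)

grad_bytes = 100e6 * 4  # 100M fp32 parameters
full = transfer_seconds(grad_bytes, 10)          # uncompressed over 10 Gbps
compressed = transfer_seconds(grad_bytes / 4, 10)  # assumed 1 byte/element
print(full, compressed)  # 0.32 0.08
```

The real speedup depends on the algorithm's extra compute and on how much of the step time is communication, so treat this as an upper bound on the benefit.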
> The same question was answered in google/nccl-fastsocket#2
> It seems that your 10Gbps NIC is already saturated. In this case, the compression algorithms in bagua (for example https://tutorials.baguasys.com/algorithms/bytegrad) may help you better.

Oh, I just forgot to compare against the NIC's max bandwidth. I will rerun the tests with a higher-bandwidth NIC. Thanks!
Hi, I want to know if I can turn off bagua-net in this script so that I can compare with the original PyTorch throughput. Passing the --enable_bagua_net argument in bagua.distributed.launch does not work.