daebakk opened 1 year ago
You've chosen to report an unexpected problem or bug. Unless you already know the root cause of it, please include details about it by filling the issue template. The following information is missing: "Instructions To Reproduce the Issue and Full Logs";
Hello,
There is a problem: training is very slow when training a model with detectron2 across two machines.
I use RTX A6000 GPUs, 4 per node, and train my models across the two nodes. Both nodes run Ubuntu 20.04. Training works normally and the log.txt file is generated as expected.
I set the environment variables as follows.

Node 1 setting (189):
export NCCL_DEBUG="INFO"
export NCCL_SOCKET_IFNAME="enp36s0f1"
export GLOO_SOCKET_IFNAME="enp36s0f1"

Node 2 setting:
export NCCL_DEBUG="INFO"
export NCCL_SOCKET_IFNAME="enp4s0"
export GLOO_SOCKET_IFNAME="enp4s0"
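As a sanity check on settings like the above, the interface names passed to NCCL_SOCKET_IFNAME and GLOO_SOCKET_IFNAME can be verified against the interfaces that actually exist on each node. This is a minimal sketch using only the Python standard library; note that NCCL also accepts comma-separated lists and prefix matching, which this simple exact-match check does not handle.

```python
import os
import socket


def available_interfaces():
    """Return the network interface names visible on this host."""
    return [name for _, name in socket.if_nameindex()]


def check_ifname(var="NCCL_SOCKET_IFNAME"):
    """Report whether the interface named in an env var exists here.

    Only exact matches are checked; NCCL's prefix/list syntax is not
    emulated in this sketch.
    """
    value = os.environ.get(var)
    if value is None:
        print(f"{var} is not set")
        return False
    names = available_interfaces()
    ok = value in names
    print(f"{var}={value!r} -> {'found' if ok else 'NOT FOUND'} among {names}")
    return ok


if __name__ == "__main__":
    for var in ("NCCL_SOCKET_IFNAME", "GLOO_SOCKET_IFNAME"):
        check_ifname(var)
```

Running this on each node before launching training catches the common case where an interface name copied from one machine does not exist on the other.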
First, when I set only the NCCL environment variables (without the GLOO ones), I got these errors:
After I set export GLOO_SOCKET_IFNAME="enp4s0" and export GLOO_SOCKET_IFNAME="enp36s0f1" on the respective nodes, training worked, but it is far too slow. This is my NCCL bug report:
For the record, according to this guide (https://pytorch.org/docs/stable/distributed.html): "If you encounter any problem with NCCL, use Gloo as the fallback option. (Note that Gloo currently runs slower than NCCL for GPUs.)" The distributed sampler in detectron2 uses the Gloo backend.
When I run python -c "import torch; print(torch.cuda.nccl.version())" to check the NCCL version in the conda virtual environment, I get (2, 10, 3) on both machines. I did not install NCCL separately (I only installed PyTorch). What should I do?
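One thing worth ruling out before blaming NCCL or Gloo is the raw bandwidth of the link between the two nodes: if the interconnect itself is slow (e.g. 1 GbE instead of a faster fabric), any backend will crawl. A tool like iperf3 is the usual choice; the sketch below is only a rough stand-in using the Python standard library. The host and port are placeholders: run the sink on one node and the sender on the other, pointing at the sink node's address.

```python
import socket
import threading
import time


def run_sink(server_sock):
    """Accept one connection and discard everything it sends."""
    conn, _ = server_sock.accept()
    with conn:
        while conn.recv(1 << 16):
            pass


def measure_throughput(host, port, total_mb=64):
    """Send total_mb megabytes of zeros to (host, port); return MB/s.

    This measures time to hand the data to the kernel, so it is only a
    rough figure, not a substitute for iperf3.
    """
    payload = b"\x00" * (1 << 20)  # 1 MiB chunk
    with socket.create_connection((host, port)) as s:
        start = time.perf_counter()
        for _ in range(total_mb):
            s.sendall(payload)
    return total_mb / (time.perf_counter() - start)


if __name__ == "__main__":
    # Self-contained demo over loopback; across nodes, run the sink and
    # the sender on separate machines instead.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=run_sink, args=(srv,), daemon=True).start()
    print(f"loopback throughput: {measure_throughput('127.0.0.1', port):.0f} MB/s")
```

If the node-to-node figure is far below what the NICs should deliver, the slowdown is a network problem (wrong interface, wrong route, or slow link) rather than an NCCL/Gloo configuration issue.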