bytedance / byteps

A high performance and generic framework for distributed DNN training

How did you get the Horovod & BytePS performance #68

Open compete369 opened 5 years ago

compete369 commented 5 years ago

I have the same hardware environment and the same network, but I only get about half of the throughput you report. Are there any best practices or tuning experience you can share? Thanks very much! For BytePS with 1 instance and 8 GPUs, I do get a similar testing result.

ymjiang commented 5 years ago

@compete369 For BytePS, can you please try export MXNET_OMP_MAX_THREADS=10 for the servers?

bobzhuyb commented 5 years ago

@compete369

There are a few things you can try. If any of the following works for you, please let us know. Although the following environment variables start with MXNET, they apply no matter which framework the workers use (TF/MXNet/PyTorch), because the parameter server is based on MXNet. A launch sketch combining these settings follows the list.

  1. For the parameter servers, set export MXNET_OMP_MAX_THREADS=10 if you have 16 CPU cores per server. Set export MXNET_OMP_MAX_THREADS=4 if you only have 8 CPU cores.

  2. Set export MXNET_CPU_WORKER_NTHREADS=32. This may speed up the parameter server.

  3. Start more parameter server instances. For example, when you have two physical machines to run the servers, you can start 4 server instances (DMLC_NUM_SERVER=4), i.e. two per physical machine. This increases network bandwidth utilization, especially when a single TCP flow cannot saturate your link.
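
For concreteness, here is a minimal sketch of how these settings might be combined on one of the server machines. The launcher path, scheduler address, and worker/server counts are assumptions or placeholders, not part of the original advice; substitute whatever launch command and topology you actually use.

```bash
# Hypothetical environment for one BytePS server instance (16 physical cores assumed).
export DMLC_ROLE=server
export DMLC_NUM_WORKER=2             # placeholder: two worker machines
export DMLC_NUM_SERVER=4             # suggestion 3: two instances per physical server machine
export DMLC_PS_ROOT_URI=10.0.0.1     # placeholder: scheduler IP
export DMLC_PS_ROOT_PORT=1234        # placeholder: scheduler port

export MXNET_OMP_MAX_THREADS=10      # suggestion 1: 16 CPU cores per server
export MXNET_CPU_WORKER_NTHREADS=32  # suggestion 2: may need tuning (see discussion below)

# Start one server instance; run this again (in another shell or container) for the
# second instance on the same machine. The launcher path is an assumption.
python3 /usr/local/byteps/launcher/launch.py
```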

compete369 commented 5 years ago

Hello, I followed your advice except for export MXNET_CPU_WORKER_NTHREADS=32, and got 4605 imgs/sec in total with 2 workers (8 GPUs, 64 cores, 256 GB memory each) and 4 servers (16 cores, 16 GB memory each). Thanks very much!

If I include export MXNET_CPU_WORKER_NTHREADS=32, the servers go crazy, so I dropped it.

2 quick questions:

  1. When the test reaches the end, there is an exception. Each GPU process prints the same traceback, interleaved with the final throughput line (Iter #99: 289.7 img/sec per GPU):

     Traceback (most recent call last):
       File "/usr/local/byteps/example/pytorch/benchmark_byteps.py", line 132, in
         raise Exception
     Exception

  2. Do you know how to analyze the NCCL rings? I am wondering whether the rings make use of NVLink correctly:

     worker-pytorch-0:45:45 [7] NCCL INFO Ring 00 : 3[7] -> 0[4] via P2P/IPC
     worker-pytorch-0:42:42 [5] NCCL INFO Ring 00 : 1[5] -> 2[6] via P2P/IPC
     worker-pytorch-0:46:46 [6] NCCL INFO Ring 00 : 2[6] -> 3[7] via P2P/IPC
     worker-pytorch-0:41:41 [4] NCCL INFO Ring 00 : 0[4] -> 1[5] via P2P/IPC
     worker-pytorch-0:41:41 [4] NCCL INFO Ring 01 : 0[4] -> 2[6] via P2P/IPC
     worker-pytorch-0:42:42 [5] NCCL INFO Ring 01 : 1[5] -> 3[7] via P2P/IPC
     worker-pytorch-0:45:45 [7] NCCL INFO Ring 01 : 3[7] -> 0[4] via P2P/IPC
     worker-pytorch-0:46:46 [6] NCCL INFO Ring 01 : 2[6] -> 1[5] via P2P/IPC
     worker-pytorch-0:41:41 [4] NCCL INFO Ring 02 : 0[4] -> 3[7] via P2P/IPC
     worker-pytorch-0:46:46 [6] NCCL INFO Ring 02 : 2[6] -> 0[4] via P2P/IPC
     worker-pytorch-0:45:45 [7] NCCL INFO Ring 02 : 3[7] -> 1[5] via P2P/IPC
     worker-pytorch-0:42:42 [5] NCCL INFO Ring 02 : 1[5] -> 2[6] via P2P/IPC
     worker-pytorch-0:46:46 [6] NCCL INFO Ring 03 : 2[6] -> 1[5] via P2P/IPC
     worker-pytorch-0:42:42 [5] NCCL INFO Ring 03 : 1[5] -> 0[4] via P2P/IPC
     worker-pytorch-0:41:41 [4] NCCL INFO Ring 03 : 0[4] -> 3[7] via P2P/IPC
     worker-pytorch-0:45:45 [7] NCCL INFO Ring 03 : 3[7] -> 2[6] via P2P/IPC
     worker-pytorch-0:45:45 [7] NCCL INFO Ring 04 : 3[7] -> 0[4] via P2P/IPC
     worker-pytorch-0:42:42 [5] NCCL INFO Ring 04 : 1[5] -> 2[6] via P2P/IPC
     worker-pytorch-0:41:41 [4] NCCL INFO Ring 04 : 0[4] -> 1[5] via P2P/IPC
     worker-pytorch-0:46:46 [6] NCCL INFO Ring 04 : 2[6] -> 3[7] via P2P/IPC
     worker-pytorch-0:46:46 [6] NCCL INFO Ring 05 : 2[6] -> 1[5] via P2P/IPC
     worker-pytorch-0:41:41 [4] NCCL INFO Ring 05 : 0[4] -> 2[6] via P2P/IPC
     worker-pytorch-0:45:45 [7] NCCL INFO Ring 05 : 3[7] -> 0[4] via P2P/IPC
     worker-pytorch-0:42:42 [5] NCCL INFO Ring 05 : 1[5] -> 3[7] via P2P/IPC
     worker-pytorch-0:42:42 [5] NCCL INFO Ring 06 : 1[5] -> 2[6] via P2P/IPC
     worker-pytorch-0:41:41 [4] NCCL INFO Ring 06 : 0[4] -> 3[7] via P2P/IPC
     worker-pytorch-0:45:45 [7] NCCL INFO Ring 06 : 3[7] -> 1[5] via P2P/IPC
     worker-pytorch-0:46:46 [6] NCCL INFO Ring 06 : 2[6] -> 0[4] via P2P/IPC
     worker-pytorch-0:46:46 [6] NCCL INFO Ring 07 : 2[6] -> 1[5] via P2P/IPC
     worker-pytorch-0:45:45 [7] NCCL INFO Ring 07 : 3[7] -> 2[6] via P2P/IPC
     worker-pytorch-0:42:42 [5] NCCL INFO Ring 07 : 1[5] -> 0[4] via P2P/IPC
     worker-pytorch-0:41:41 [4] NCCL INFO Ring 07 : 0[4] -> 3[7] via P2P/IPC

ymjiang commented 5 years ago

@compete369 Good to know you got a performance improvement.

  1. We fixed the exception in https://github.com/bytedance/byteps/commit/b825042a29d75fe58e30636c738113fabffe41bd. The code in our docker images is stale, though. We will update the images; for now you can manually update the code and rebuild (see the sketch after this list).

  2. Perhaps take a look at nvidia-smi nvlink -sc. This might be helpful; the sketch after this list also shows a topology check.
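
To make both points concrete, here is a hedged sketch. The repository path and build command are assumptions about the stock docker images, the NVLink commands are generic nvidia-smi/NCCL diagnostics rather than anything BytePS-specific, and the exact nvlink counter flags may differ across driver versions.

```bash
# 1) Pull the fix and rebuild BytePS inside the worker container
#    (assumes the repo is checked out at /usr/local/byteps, as the traceback suggests).
cd /usr/local/byteps
git pull
python3 setup.py install

# 2) Check whether the NCCL rings can actually use NVLink.
nvidia-smi topo -m           # GPU-to-GPU connection matrix (NV# entries mean NVLink)
nvidia-smi nvlink -s         # per-link NVLink status
nvidia-smi nvlink -sc 0bz    # program counter 0 to count bytes on all packet types...
nvidia-smi nvlink -g 0       # ...then read the counters while the benchmark runs

# "via P2P/IPC" in the NCCL_DEBUG=INFO log (as in your output) means the GPUs talk to
# each other directly; non-zero NVLink counters confirm the traffic is going over NVLink.
```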

    If I include export MXNET_CPU_WORKER_NTHREADS=32, the servers go crazy, so I dropped it.

Besides, what does "crazy" mean? Does it mean bad performance?

compete369 commented 5 years ago

Crazy: the CPU usage goes very high, close to 100%, while GPU utilization drops to 2-10%. Thanks very much for your guidance.

ymjiang commented 5 years ago

@compete369 That sounds like your CPU becomes the bottleneck. Perhaps you can reduce MXNET_CPU_WORKER_NTHREADS to 16 or an even smaller value; it requires some tuning (see the sketch below).
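
As a rough illustration of that tuning (the specific values are only starting points, and the monitoring commands are generic, not BytePS-specific):

```bash
# On each server host, try an intermediate thread count and re-run the benchmark.
export MXNET_OMP_MAX_THREADS=10
export MXNET_CPU_WORKER_NTHREADS=16   # 32 oversubscribed the 16-core servers here; try 8/16/24

# While the benchmark runs, check whether the server CPUs saturate while the worker GPUs starve:
top               # on the servers: CPU utilization
nvidia-smi -l 1   # on the workers: GPU utilization, refreshed every second
```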

spgeaney113 commented 5 years ago

Doesn't really help me much

Bama4542 commented 5 years ago

Hasn't really done much good for me either; it's got quite a few bugs to work out. Did you like the way it's set up?


bobzhuyb commented 5 years ago

@spgeaney113 @Bama4542 If you have specific questions, please open new issues. You are only spamming this thread now.

compete369 commented 5 years ago

How did you run the performance test reported on the main page: synthetic data, or real ImageNet on NAS? I tested Horovod with 32 GPUs, and the performance dropped about 20% (8300 -> 6477).

ymjiang commented 5 years ago

@compete369 We used synthetic data in the performance report.

compete369 commented 5 years ago

Could you share which public cloud you relied on, if possible? Just curious about the good network stability and performance. Thanks!