compete369 opened this issue 5 years ago
@compete369 For BytePS, can you please try export MXNET_OMP_MAX_THREADS=10 for the servers?
@compete369
There are a few things you can try. If any of the following works for you, please let us know. Though the following env vars start with MXNET_, they apply regardless of the worker framework (TF/MXNet/PyTorch), because the parameter server is based on MXNet.
1. For the parameter servers, set export MXNET_OMP_MAX_THREADS=10 if you have 16 CPU cores per server. Set export MXNET_OMP_MAX_THREADS=4 if you only have 8 CPU cores.
2. Set export MXNET_CPU_WORKER_NTHREADS=32. This may speed up the parameter server.
3. Start more parameter server instances. For example, when you have two physical machines to run the servers, you can start four (DMLC_NUM_SERVER=4), i.e., two server instances per physical machine. This increases network bandwidth utilization, especially when a single TCP flow cannot saturate your bandwidth.
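Assuming a typical BytePS server launch, the settings above might be combined as in the sketch below. The scheduler address, port, and worker count are placeholders, and the launcher path may differ between BytePS versions:

```shell
# Sketch of a BytePS parameter-server environment, based on the advice above.
# All addresses/ports are placeholders -- adjust for your own cluster.
export DMLC_ROLE=server
export DMLC_NUM_WORKER=2            # placeholder: number of worker machines
export DMLC_NUM_SERVER=4            # two server instances per physical machine
export DMLC_PS_ROOT_URI=10.0.0.1    # placeholder: scheduler IP
export DMLC_PS_ROOT_PORT=1234      # placeholder: scheduler port
export MXNET_OMP_MAX_THREADS=10     # for 16 CPU cores per server (use 4 for 8 cores)
export MXNET_CPU_WORKER_NTHREADS=32 # may speed up the server; drop it if CPU saturates
python3 /usr/local/byteps/launcher/launch.py  # launcher path may differ by version
```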
Hello, I followed your advice except "export MXNET_CPU_WORKER_NTHREADS=32", and got a total of 4605 imgs/sec with 2 workers (8 GPUs + 64 cores + 256 GB memory) and 4 servers (16 cores + 16 GB memory). Thanks very much!
When I included "export MXNET_CPU_WORKER_NTHREADS=32", the servers went crazy, so I dropped it.
2 quick questions:
Exception
Traceback (most recent call last):
File "/usr/local/byteps/example/pytorch/benchmark_byteps.py", line 132, in
@compete369 Good to know you got a performance improvement.
We fixed the exception in https://github.com/bytedance/byteps/commit/b825042a29d75fe58e30636c738113fabffe41bd. The code in our docker images is stale, though. We will update the images. For now you can manually update the code and rebuild.
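For example, pulling in the fix inside the container might look like the sketch below. The repo path matches the one in the traceback above, but the install command is an assumption and may differ by image:

```shell
# Sketch: update the BytePS source inside the container to include the fix,
# then reinstall. The path and install command are assumptions.
cd /usr/local/byteps
git fetch origin
git checkout b825042a29d75fe58e30636c738113fabffe41bd  # commit containing the fix
python3 setup.py install  # or `pip3 install .`, depending on the image
```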
nvidia-smi nvlink -sc might be helpful here.
Besides, you mentioned: "if including export MXNET_CPU_WORKER_NTHREADS=32, the servers are going crazy, then I dropped it". What does "crazy" mean? Does it mean bad performance?
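A typical way to use the NVLink counters with nvidia-smi is sketched below; the exact flag spellings vary across driver versions, so check nvidia-smi nvlink -h first:

```shell
# Sketch: enable NVLink throughput counters and read them back.
# Flag syntax varies across nvidia-smi versions; verify with `nvidia-smi nvlink -h`.
nvidia-smi nvlink -sc 0bz   # set counter 0 to count bytes for all packet types
nvidia-smi nvlink -g 0      # read counter 0 for all links
```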
Crazy: CPU usage goes very high, about 100%, but GPU utilization drops to 2-10%. Thanks very much for your instructions.
@compete369 That sounds like your CPU becomes the bottleneck. Perhaps you can reduce MXNET_CPU_WORKER_NTHREADS to 16 or an even smaller value. It requires some tuning.
Doesn't really help me much
Hasn't really done much good for me either; it's got quite a few bugs to work out! Did you like the way it's set up?
@spgeaney113 @Bama4542 If you have specific questions, please open new issues. You are only spamming this thread now.
How did you test the performance reported on the main page, with synthetic data or real ImageNet on the NAS? I tested Horovod with 32 GPUs, and the performance dropped 20% (8300 -> 6477).
@compete369 We used synthetic data in the performance report.
Could you share which public cloud you relied on, if possible? Just curious about the good network stability and performance. Thanks!
I have the same hardware environment and the same network, but I could not get your result; I got almost half of it. Any best practices or experience to share? Thanks very much! For BytePS with 1 instance and 8 GPUs, I have a similar testing result.