apache / mxnet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
https://mxnet.apache.org
Apache License 2.0

Loss normalizer needs to be all-reduced for softmax layer in distributed training if normalization type is set to valid #12450

Open threeleafzerg opened 6 years ago

threeleafzerg commented 6 years ago

Description: We are currently enabling multi-node training for MXNet Sockeye and found that, when the normalization type is `valid`, the loss normalizer for the softmax layer is not correct in distributed training (`softmax_output-inl.h`). The correct implementation should be:

- If gradients are all-reduced in sum mode, `valid_cnt` should be all-reduced as well: `grads = grads / valid_cnt`.
- If gradients are all-reduced in average mode, `valid_cnt` should also be all-reduced: `grads = grads * node_num / valid_cnt`.

The main reason is that in topologies such as SSD (CNN) or NMT (RNN), each node can have a different `valid_cnt`, so normalizing by the local count alone gives inconsistent scaling. A sketch of the idea follows.
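The snippet below is a minimal sketch of the proposed normalization, written with mpi4py rather than the actual `softmax_output-inl.h` kernel; the function `normalize_grads`, the `mode` parameter, and the variable names are hypothetical and only illustrate how the all-reduced `valid_cnt` would be applied in the two modes.

```python
# Hypothetical sketch of the proposed fix (not the actual MXNet code).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
node_num = comm.Get_size()

def normalize_grads(grads, local_valid_cnt, mode="sum"):
    """All-reduce the valid count across nodes before normalizing gradients."""
    # Each node may see a different number of valid (non-padded) labels,
    # so the valid count must be summed over all workers first.
    global_valid_cnt = comm.allreduce(local_valid_cnt, op=MPI.SUM)
    if mode == "sum":
        # Gradients were summed across nodes: divide by the global valid count.
        return grads / global_valid_cnt
    else:
        # Gradients were averaged across nodes: multiply back by node_num
        # before dividing by the global valid count.
        return grads * node_num / global_valid_cnt
```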

vrakesh commented 6 years ago

Thank you for the suggestion, @threeleafzerg

@mxnet-label-bot [Distributed]