Open leoxiaobin opened 6 years ago
We at TuSimple have also observed this phenomenon. It is the bottleneck for large-scale training of detection models. We also found that changing the CPU_WORKER count does not alleviate it. A viable workaround is to rewrite memory-oriented operators in pure C++, copying in_data from the GPU and copying out_data back to the GPU.
Currently the FExecType of CustomOp is kLocal (https://github.com/apache/incubator-mxnet/blob/master/src/operator/custom/custom.cc#L404), which means it runs on the scheduling thread without being pushed to the engine. Is this the reason the custom op cannot scale? Why was kLocal chosen for CustomOp in https://github.com/apache/incubator-mxnet/pull/6928? @piiswrong
I have encountered the same problem when training object detection models. Can you fix this bug, or suggest an alternative solution? @piiswrong
I have encountered the same issue with a Seq2Seq model. Any solutions or workarounds? @piiswrong
Hi @piiswrong @mli, it seems that many people have encountered the same issue. Is this a bug in MXNet?
I guess this is an artifact of the GIL, since the custom op code is in Python. Design-wise, one way I see to circumvent the GIL is to parse the Python construct and pass it to the backend, which is not easy to do.
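To illustrate the GIL effect described here, a standalone demo (unrelated to MXNet itself): on standard CPython, pure-Python work in two threads does not overlap, while a call that releases the GIL, such as `time.sleep`, does.

```python
import threading
import time

def busy(n):
    # pure-Python loop: holds the GIL the whole time
    while n:
        n -= 1

def run_in_threads(fn, argsets):
    """Run fn once per argument tuple, each in its own thread; return wall time."""
    threads = [threading.Thread(target=fn, args=a) for a in argsets]
    t0 = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - t0

N = 2_000_000
one = run_in_threads(busy, [(N,)])
two = run_in_threads(busy, [(N,), (N,)])
# on standard CPython, the two-thread run takes roughly twice as long: no overlap
print("busy x1: %.2fs, busy x2 threads: %.2fs" % (one, two))

# time.sleep releases the GIL, so two sleeps in threads do overlap (~0.5s total)
slept = run_in_threads(time.sleep, [(0.5,), (0.5,)])
print("two 0.5s sleeps in threads: %.2fs" % slept)
```

This is the same reason a numpy-heavy CustomOp forward, which mostly holds the GIL, serializes across GPU worker threads.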
For now, if you care about performance, please write the operator in C++ instead.
Thanks, @szha, but the speed was normal with the same code before PR #6928; the issue only appeared after that PR.
FYI, a simple workaround for loss-type CustomOps is to comment out all calculations in forward and leave only the assign. This gives a fully parallel forward and an almost parallel backward, since the losses are then first computed during backward.
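The workaround can be sketched framework-agnostically in numpy, assuming a softmax cross-entropy loss (inside a CustomOp, these would be the bodies of `forward` and `backward`, with `self.assign` doing the pass-through):

```python
import numpy as np

def forward(x):
    # "assign only": no computation here, so multi-GPU forwards stay parallel
    return x

def backward(x, label):
    # the loss math is deferred to backward, where its gradient is needed
    # anyway: d(CE(softmax(x)))/dx = softmax(x) - one_hot(label), averaged
    e = np.exp(x - x.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    p[np.arange(len(label)), label] -= 1.0
    return p / len(label)

grad = backward(np.zeros((2, 3)), np.array([0, 1]))
print(grad)  # each row sums to 0; the labeled entry is negative
```

The loss value itself is never materialized in forward, which is fine when only the gradient matters for training.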
A recent PR makes the ops before and after a CustomOp run in parallel, but the CustomOp itself still runs sequentially. Does anyone have a clue why this happens? For example, from a 2-GPU run:
```
entering 0: 1516522076.3805
exiting 0: 1516522076.6744
entering 1: 1516522076.6760
exiting 1: 1516522076.9477
entering 0: 1516522077.0904
exiting 0: 1516522077.3583
entering 1: 1516522077.3599
exiting 1: 1516522077.6237
entering 0: 1516522077.7664
exiting 0: 1516522078.0574
entering 1: 1516522078.0590
exiting 1: 1516522078.3297
```
An MCVE, run in a 2-GPU setting:
```python
import time

import mxnet as mx
import numpy as np


class DebugOperator(mx.operator.CustomOp):
    def __init__(self, **kwargs):
        super(DebugOperator, self).__init__()
        self.pos = kwargs.get("pos", None)

    def forward(self, is_train, req, in_data, out_data, aux):
        print("entering %d: %.4f" % (in_data[0][0].context.device_id, time.time()))
        time.sleep(0.1)
        self.assign(out_data[0], req[0], 0)
        print("exiting %d: %.4f" % (in_data[0][0].context.device_id, time.time()))

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        self.assign(in_grad[0], req[0], 0)


@mx.operator.register("Debug")
class DebugProp(mx.operator.CustomOpProp):
    def __init__(self, **kwargs):
        super(DebugProp, self).__init__(need_top_grad=False)
        self._kwargs = kwargs

    def list_arguments(self):
        return ['data']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shape):
        return in_shape, [(1, )]

    def create_operator(self, ctx, shapes, dtypes):
        return DebugOperator(**self._kwargs)


def get_symbol():
    data = mx.sym.var("data")
    label = mx.sym.var("softmax_label")
    proj = mx.sym.FullyConnected(data, num_hidden=1)
    debug = mx.sym.Custom(proj, op_type="Debug", name="debug")
    return mx.sym.Group([debug, label])


if __name__ == "__main__":
    gpus = [0, 1]
    sym = get_symbol()
    mod = mx.module.Module(sym, context=[mx.gpu(i) for i in gpus])
    mod.bind(data_shapes=[("data", (len(gpus), 1))],
             label_shapes=[("softmax_label", (len(gpus), 1))])
    data = mx.io.NDArrayIter(data=np.zeros((10000, 1)), label=np.zeros((10000, 1)),
                             batch_size=len(gpus))
    mod.fit(data, num_epoch=1, eval_metric=mx.metric.Loss(output_names=["debug_output"]))
```
The outputs are:

```
entering 1: 1516523993.4081
exiting 1: 1516523993.5086
entering 0: 1516523993.5088
exiting 0: 1516523993.6092
entering 1: 1516523993.6362
exiting 1: 1516523993.7368
entering 0: 1516523993.7369
exiting 0: 1516523993.8373
entering 1: 1516523993.8394
exiting 1: 1516523993.9398
entering 0: 1516523993.9400
exiting 0: 1516523994.0404
entering 1: 1516523994.0634
exiting 1: 1516523994.1692
entering 0: 1516523994.1694
exiting 0: 1516523994.2698
entering 0: 1516523994.2750
exiting 0: 1516523994.3755
entering 1: 1516523994.3757
exiting 1: 1516523994.4761
entering 0: 1516523994.4873
exiting 0: 1516523994.5877
entering 1: 1516523994.5879
exiting 1: 1516523994.6883
entering 0: 1516523994.6943
exiting 0: 1516523994.7948
```
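As a quick sanity check on traces like this, one can pair the timestamps into per-device intervals and test whether any two overlap; for a sequential trace they never do. A small hypothetical helper, assuming the entering/exiting log format above:

```python
import re

# abbreviated sample in the same format as the trace above
LOG = """\
entering 0: 100.00
exiting 0: 100.29
entering 1: 100.30
exiting 1: 100.56
"""

def intervals(log):
    """Pair each 'entering' line with the next 'exiting' line of the same device."""
    open_t, spans = {}, []
    for kind, dev, t in re.findall(r"(entering|exiting) (\d+): ([\d.]+)", log):
        if kind == "entering":
            open_t[dev] = float(t)
        else:
            spans.append((open_t.pop(dev), float(t)))
    return spans

def overlaps(spans):
    # after sorting by start time, any interval starting before the previous
    # one ends means the two forwards ran concurrently
    spans = sorted(spans)
    return any(b_start < a_end for (_, a_end), (b_start, _) in zip(spans, spans[1:]))

print(overlaps(intervals(LOG)))  # False: the forwards ran strictly one after another
```

Running this on the full trace above likewise reports no overlap, which is exactly the serialization being described.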
With the latest code on master, this problem still exists.
Proposed labels: "Python", "Distributed", "Ubuntu"
Does this problem still exist? There is a recent PR about this problem https://github.com/apache/incubator-mxnet/pull/9283/files that has been merged.
@leoxiaobin Does this issue still exist?
The issue still exists in the latest release, 1.3.1, and in 1.5.0 on master. @sxjscience @vandanavk Is there any workaround other than writing C++ layers? @piiswrong
Description
I found that after #6928, when you use numpy in a custom operator, the multi-GPU forward passes no longer run in parallel; before #6928 they did. This can be reproduced with MXNet's image-classification example by replacing the Softmax operator with a custom softmax.
Environment info
Package used (Python/R/Scala/Julia): Python
Build info (Required if built from source)
Compiler (gcc/clang/mingw/visual studio): gcc
MXNet commit hash: ed190957bb57abd29aca1d22d201a87fd871a272
Minimum reproducible example
The custom softmax operator is only meant to reproduce this issue, so I did not implement the backward pass.
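The original snippet is not preserved in this copy of the issue, but the forward math such a custom softmax would perform reduces to a few numpy lines (a sketch; the CustomOp registration boilerplate would mirror the Debug op shown earlier, with `forward` reading `in_data[0].asnumpy()` and assigning the result to `out_data[0]`):

```python
import numpy as np

def softmax_forward(x):
    # subtract the row max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

y = softmax_forward(np.array([[1.0, 2.0, 3.0]]))
print(y)  # each row sums to 1
```

Because this forward runs as Python holding the GIL, it is enough to trigger the serialization described in this issue.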
Steps to reproduce
Training speed (screenshots of the original run and the custom-softmax run; not preserved in this copy).
What have you tried to solve it?
I used MXNet's built-in profiler to get more detail about the execution time, comparing the original version with the custom-softmax version. One can see that when using the custom operator, the forward procedures on multiple GPUs run sequentially rather than in parallel.
I have also tried the MXNet version from before #6928: with or without the custom softmax operator, the speed is almost the same (screenshots: original training speed before #6928, and with the custom softmax before #6928).