apache / mxnet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
https://mxnet.apache.org
Apache License 2.0

Training crash SSD with LeakyReLU(rrelu) #12894

Open rayjs opened 5 years ago

rayjs commented 5 years ago

Description

Training SSD networks with the LeakyReLU (rrelu) activation causes the training to crash. I have tried different networks, including vgg16_reduced.py, and it always crashes.

Environment info (Required)

----------Python Info----------
('Version      :', '2.7.12')
('Compiler     :', 'GCC 5.4.0 20160609')
('Build        :', ('default', 'Dec  4 2017 14:50:18'))
('Arch         :', ('64bit', 'ELF'))
------------Pip Info-----------
('Version      :', '18.1')
('Directory    :', '/usr/local/lib/python2.7/dist-packages/pip')
----------MXNet Info-----------
('Version      :', '1.3.1')
('Directory    :', '/home/xx/Documents/extern_libs/incubator-mxnet/python/mxnet')
Hashtag not found. Not installed from pre-built package.
----------System Info----------
('Platform     :', 'Linux-4.4.0-137-generic-x86_64-with-Ubuntu-16.04-xenial')
('system       :', 'Linux')
('node         :', 'et2')
('release      :', '4.4.0-137-generic')
('version      :', '#163-Ubuntu SMP Mon Sep 24 13:14:43 UTC 2018')
----------Hardware Info----------
('machine      :', 'x86_64')
('processor    :', 'x86_64')
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 94
Model name:            Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz
Stepping:              3
CPU MHz:               3377.625
CPU max MHz:           3600.0000
CPU min MHz:           800.0000
BogoMIPS:              6383.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              6144K
NUMA node0 CPU(s):     0-3
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch invpcid_single intel_pt ssbd ibrs ibpb stibp kaiser tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp flush_l1d
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.1568 sec, LOAD: 1.4836 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.1446 sec, LOAD: 2.0502 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.1938 sec, LOAD: 1.3499 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.1528 sec, LOAD: 0.1908 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.2012 sec, LOAD: 0.0508 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.3913 sec, LOAD: 0.8876 sec.

Package used (Python/R/Scala/Julia): Python

Build info (Required if built from source)

Compiler (gcc/clang/mingw/visual studio): gcc

MXNet commit hash: 74638105f5480349cf57cda40a37475d626dbf41

Build config: make -j4 USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1

Error Message:

[16:24:25] src/io/iter_image_det_recordio.cc:283: ImageDetRecordIOParser: /home/xx/Documents/extern_libs/incubator-mxnet/example/ssd/data/train.rec, use 3 threads for decoding..
[16:24:26] src/io/iter_image_det_recordio.cc:340: ImageDetRecordIOParser: /home/xx/Documents/extern_libs/incubator-mxnet/example/ssd/data/train.rec, label padding width: 350
[16:24:26] src/io/iter_image_det_recordio.cc:283: ImageDetRecordIOParser: /home/xx/Documents/extern_libs/incubator-mxnet/example/ssd/data/val.rec, use 3 threads for decoding..
[16:24:26] src/io/iter_image_det_recordio.cc:340: ImageDetRecordIOParser: /home/xx/Documents/extern_libs/incubator-mxnet/example/ssd/data/val.rec, label padding width: 350
[<Symbol relu4_3>, <Symbol relu7>, <Symbol multi_feat_2_conv_3x3_relu>, <Symbol multi_feat_3_conv_3x3_relu>, <Symbol multi_feat_4_conv_3x3_relu>, <Symbol multi_feat_5_conv_3x3_relu>]
<Symbol relu4_3>
<Symbol relu7>
<Symbol multi_feat_2_conv_3x3_relu>
<Symbol multi_feat_3_conv_3x3_relu>
<Symbol multi_feat_4_conv_3x3_relu>
<Symbol multi_feat_5_conv_3x3_relu>
INFO:root:Experimental: start training from scratch with (gpu(0),gpu(1))
[16:24:32] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:109: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
Traceback (most recent call last):
  File "train.py", line 156, in <module>
    voc07_metric=args.use_voc07_metric)
  File "/home/xx/Documents/extern_libs/incubator-mxnet/example/ssd/train/train_net.py", line 301, in train_net
    monitor=monitor)
  File "/home/xx/Documents/extern_libs/incubator-mxnet/python/mxnet/module/base_module.py", line 539, in fit
    self.update_metric(eval_metric, data_batch.label)
  File "/home/xx/Documents/extern_libs/incubator-mxnet/python/mxnet/module/module.py", line 773, in update_metric
    self._exec_group.update_metric(eval_metric, labels, pre_sliced)
  File "/home/xx/Documents/extern_libs/incubator-mxnet/python/mxnet/module/executor_group.py", line 639, in update_metric
    eval_metric.update_dict(labels_, preds)
  File "/home/xx/Documents/extern_libs/incubator-mxnet/python/mxnet/metric.py", line 132, in update_dict
    self.update(label, pred)
  File "/home/xx/Documents/extern_libs/incubator-mxnet/example/ssd/train/metric.py", line 48, in update
    cls_prob = preds[0].asnumpy()
  File "/home/xx/Documents/extern_libs/incubator-mxnet/python/mxnet/ndarray/ndarray.py", line 1980, in asnumpy
    ctypes.c_size_t(data.size)))
  File "/home/xx/Documents/extern_libs/incubator-mxnet/python/mxnet/base.py", line 252, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [16:24:32] include/mxnet/././resource.h:155: Check failed: req.type == ResourceRequest::kTempSpace (42955292 vs. 1)

Stack trace returned 10 entries:
[bt] (0) /home/xx/Documents/extern_libs/incubator-mxnet//lib/libmxnet.so(dmlc::StackTrace[abi:cxx11]()+0x5b) [0x7f98868b8fdb]
[bt] (1) /home/xx/Documents/extern_libs/incubator-mxnet//lib/libmxnet.so(mshadow::Tensor<mshadow::gpu, 1, unsigned int> mxnet::Resource::get_space_typed<mshadow::gpu, 1, unsigned int>(mshadow::Shape<1>, mshadow::Stream<mshadow::gpu>*) const+0x6c5) [0x7f9889b02f35]
[bt] (2) /home/xx/Documents/extern_libs/incubator-mxnet//lib/libmxnet.so(mxnet::op::LeakyReLUOp<mshadow::gpu, float>::Forward(mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0x5bc) [0x7f988b936e0c]
[bt] (3) /home/xx/Documents/extern_libs/incubator-mxnet//lib/libmxnet.so(mxnet::op::OperatorState::Forward(mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0x363) [0x7f98892bdbc3]
[bt] (4) /home/xx/Documents/extern_libs/incubator-mxnet//lib/libmxnet.so(mxnet::exec::StatefulComputeExecutor::Run(mxnet::RunContext, bool)+0x59) [0x7f9889a0c829]
[bt] (5) /home/xx/Documents/extern_libs/incubator-mxnet//lib/libmxnet.so(+0x3e29526) [0x7f98899d7526]
[bt] (6) /home/xx/Documents/extern_libs/incubator-mxnet//lib/libmxnet.so(mxnet::engine::ThreadedEngine::ExecuteOprBlock(mxnet::RunContext, mxnet::engine::OprBlock*)+0x8f5) [0x7f9889926045]
[bt] (7) /home/xx/Documents/extern_libs/incubator-mxnet//lib/libmxnet.so(void mxnet::engine::ThreadedEnginePerDevice::GPUWorker<(dmlc::ConcurrentQueueType)0>(mxnet::Context, bool, mxnet::engine::ThreadedEnginePerDevice::ThreadWorkerBlock<(dmlc::ConcurrentQueueType)0>*, std::shared_ptr<dmlc::ManualEvent> const&)+0xeb) [0x7f988993c78b]
[bt] (8) /home/xx/Documents/extern_libs/incubator-mxnet//lib/libmxnet.so(std::_Function_handler<void (std::shared_ptr<dmlc::ManualEvent>), mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, bool)::{lambda()#4}::operator()() const::{lambda(std::shared_ptr<dmlc::ManualEvent>)#1}>::_M_invoke(std::_Any_data const&, std::shared_ptr<dmlc::ManualEvent>&&)+0x4e) [0x7f988993c9fe]
[bt] (9) /home/xx/Documents/extern_libs/incubator-mxnet//lib/libmxnet.so(std::thread::_Impl<std::_Bind_simple<std::function<void (std::shared_ptr<dmlc::ManualEvent>)> (std::shared_ptr<dmlc::ManualEvent>)> >::_M_run()+0x4a) [0x7f988992563a]

Minimum reproducible example

In vgg16_reduced.py under example/ssd/symbol, make the following change:

relu1_1 = mx.symbol.LeakyReLU(data=conv1_1, act_type="rrelu", name="relu1_1")

Replacing the activations at other positions with LeakyReLU (rrelu) also causes the training to crash.
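Below is a minimal sketch that isolates the operator outside of SSD (untested here; it assumes a CUDA build of MXNet with at least one GPU, and the input shape is arbitrary):

import mxnet as mx

data = mx.symbol.Variable("data")
# rrelu samples random slopes during training, so the failure should
# only appear on the is_train=True path.
act = mx.symbol.LeakyReLU(data=data, act_type="rrelu", name="act")

exe = act.simple_bind(ctx=mx.gpu(0), data=(32, 3, 224, 224))
exe.forward(is_train=True)
# MXNet executes asynchronously, so the MXNetError surfaces at this
# synchronization point, just as it does in the SSD metric update.
out = exe.outputs[0].asnumpy()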

Steps to reproduce

python train.py --gpus 0,1 --batch-size 32 --pretrained ''

What have you tried to solve it?

I have had to replace LeakyReLU (rrelu) with other activations to work around this issue.
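For example, a sketch of the substitution in vgg16_reduced.py (conv1_1 is the layer feeding the activation, as in the snippet above; slope=0.25 is simply the operator's default for "leaky"):

relu1_1 = mx.symbol.LeakyReLU(data=conv1_1, act_type="leaky",
                              slope=0.25, name="relu1_1")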

lanking520 commented 5 years ago

@rayjs thanks for reporting this issue, and glad to see you have a workaround for it. This looks like a bug in the operator. Just in case, could you please make sure you are using the operator correctly by referring to the documentation? Some of the arguments should be set explicitly; otherwise they fall back to their default values.
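For reference, a sketch with the rrelu-specific arguments spelled out; the values shown are the documented defaults, so this is what the original call resolves to anyway:

import mxnet as mx

data = mx.symbol.Variable("data")
# For act_type="rrelu", the relevant arguments are lower_bound and
# upper_bound (defaults 0.125 and 0.334); slope only applies to the
# "leaky" act_type.
act = mx.symbol.LeakyReLU(data=data, act_type="rrelu",
                          lower_bound=0.125, upper_bound=0.334,
                          name="act")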

@mxnet-label-bot please add [python, operator]

rayjs commented 5 years ago

@lanking520 For rrelu, the default parameters work, so I am certain about the correct usage. From what I have found, the leaky act_type works but rrelu causes the training to crash. I have not checked the other act_type options in LeakyReLU beyond these two.
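A small sketch contrasting the two act_types in training mode (assumes a GPU build; the shape is arbitrary):

import mxnet as mx

for act_type in ("leaky", "rrelu"):
    sym = mx.symbol.LeakyReLU(data=mx.symbol.Variable("data"),
                              act_type=act_type, name="act")
    exe = sym.simple_bind(ctx=mx.gpu(0), data=(4, 8))
    try:
        exe.forward(is_train=True)
        exe.outputs[0].asnumpy()  # async errors surface on this sync
        print(act_type, "ok")
    except mx.base.MXNetError as err:
        print(act_type, "failed:", err)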

srochel commented 5 years ago

@mxnet-label-bot please add [bug]

andrewfayres commented 5 years ago

@mxnet-label-bot [bug]

stu1130 commented 5 years ago

Same as #14447

@mxnet-label-bot add [bug]