apache / mxnet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
https://mxnet.apache.org
Apache License 2.0

[bug] mxnet.ndarray.sparse.norm fallback regression in 1.5.0 and master #16060

Open yifeim opened 5 years ago

yifeim commented 5 years ago

Description

mxnet.ndarray.sparse.norm causes a storage-type fallback for CSRNDArray inputs in 1.5.0 and master. Additionally, the fact that this regression passed the unit tests suggests deeper issues: sparse storage fallbacks happen silently in the background instead of being surfaced to the caller, which makes it difficult to identify the root cause.
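
As a rough illustration of why a value-only unit test can miss this (a minimal sketch, not taken from the original report): the fallback is numerically correct, so asserting on the result alone passes whether or not the sparse kernel is used.

import mxnet as mx
import numpy as np

# Sketch: the fallback densifies the csr input internally but still returns
# the correct value, so a value-based assertion passes either way.
dense = mx.nd.array([[0, 2], [0, 0], [3, 0]])
sparse = dense.tostype('csr')
assert np.isclose(sparse.norm().asscalar(), dense.norm().asscalar())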

Environment info (Required)

----------Python Info----------
('Version      :', '2.7.15')
('Compiler     :', 'GCC 7.3.0')
('Build        :', ('default', 'Feb 28 2019 04:00:11'))
('Arch         :', ('64bit', ''))
------------Pip Info-----------
('Version      :', '10.0.1')
('Directory    :', '/home/ec2-user/anaconda3/envs/mxnet_p27/lib/python2.7/site-packages/pip')
----------MXNet Info-----------
('Version      :', '1.6.0')
('Directory    :', '/home/ec2-user/anaconda3/envs/mxnet_p27/lib/python2.7/site-packages/mxnet')
('Commit Hash   :', '3f7b6ee57865b79634c82a8f58e3551fc95e4dda')
('Library      :', ['/home/ec2-user/anaconda3/envs/mxnet_p27/lib/python2.7/site-packages/mxnet/libmxnet.so'])
Build features:
✔ CUDA
✔ CUDNN
✔ NCCL
✖ CUDA_RTC
✖ TENSORRT
✔ CPU_SSE
✔ CPU_SSE2
✔ CPU_SSE3
✔ CPU_SSE4_1
✔ CPU_SSE4_2
✖ CPU_SSE4A
✔ CPU_AVX
✖ CPU_AVX2
✔ OPENMP
✖ SSE
✔ F16C
✖ JEMALLOC
✔ BLAS_OPEN
✖ BLAS_ATLAS
✖ BLAS_MKL
✖ BLAS_APPLE
✔ LAPACK
✔ MKLDNN
✔ OPENCV
✖ CAFFE
✖ PROFILER
✔ DIST_KVSTORE
✖ CXX14
✖ INT64_TENSOR_SIZE
✔ SIGNAL_HANDLER
✖ DEBUG
✖ TVM_OP
----------System Info----------
('Platform     :', 'Linux-4.14.133-88.112.amzn1.x86_64-x86_64-with-glibc2.2.5')
('system       :', 'Linux')
('node         :', 'ip-172-16-12-219')
('release      :', '4.14.133-88.112.amzn1.x86_64')
('version      :', '#1 SMP Tue Jul 30 21:21:30 UTC 2019')
----------Hardware Info----------
('machine      :', 'x86_64')
('processor    :', 'x86_64')
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    16
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping:              1
CPU MHz:               2709.117
BogoMIPS:              4600.14
Hypervisor vendor:     Xen
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              46080K
NUMA node0 CPU(s):     0-31
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0016 sec, LOAD: 0.5764 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0019 sec, LOAD: 0.3843 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0115 sec, LOAD: 0.1455 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0112 sec, LOAD: 0.1902 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1908 sec, LOAD: 0.0881 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.1898 sec, LOAD: 0.0980 sec.
----------Environment----------

Package used (Python/R/Scala/Julia): Python27 and Python36

Error Message:

[08:08:56] src/operator/contrib/../tensor/./../../common/utils.h:463:
Storage type fallback detected:
operator = norm
input storage types = [csr, ]
output storage types = [default, ]
params = {}
context.dev_mask = gpu
The operator with default storage type will be dispatched for execution. You're seeing this warning message because the operator above is unable to process the given ndarrays with specified storage types, context and parameter. Temporary dense ndarrays are generated in order to execute the operator. This does not affect the correctness of the programme. You can set environment variable MXNET_STORAGE_FALLBACK_LOG_VERBOSE to 0 to suppress this warning.
Out[3]:

[0.]
<NDArray 1 @gpu(0)>
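
For completeness, the warning can be controlled through the environment variable it mentions; a minimal sketch (assuming the variable should be set before MXNet is imported):

import os
os.environ['MXNET_STORAGE_FALLBACK_LOG_VERBOSE'] = '0'  # suppress the fallback warning
import mxnet as mx

data = mx.nd.sparse.csr_matrix((3, 4), ctx=mx.gpu())
data.norm()  # still falls back to dense execution, only the log message is silenced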

Minimum reproducible example

import mxnet as mx
data = mx.nd.sparse.csr_matrix((3,4), ctx=mx.gpu())
data.norm()
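
A possible interim workaround (a sketch, not verified across builds): for a CSRNDArray the default L2/Frobenius norm equals the L2 norm of its stored non-zero values, so the norm can be taken over the .data array to keep the computation on the stored values.

import mxnet as mx

csr = mx.nd.array([[0, 2], [0, 0], [3, 0]]).tostype('csr')
# norm over the stored values only; equal to the Frobenius norm of the matrix
frob = mx.nd.norm(csr.data)
print(frob.asscalar())  # ~3.6056 == sqrt(2**2 + 3**2)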

Steps to reproduce

(Paste the commands you ran that produced the error.)

  1. The function works in mxnet-cu100mkl==1.4.1 (no warning is generated).
  2. The function falls back (with the warning above) in mxnet-cu100mkl==1.5.0 and in the nightly builds.

What have you tried to solve it?

Downgrading to mxnet-cu100mkl==1.4.1 (the working version in step 1 above).

mxnet-label-bot commented 5 years ago

Hey, this is the MXNet Label Bot. Thank you for submitting the issue! I will try and suggest some labels so that the appropriate MXNet community members can help resolve it. Here are my recommended label(s): Bug

yifeim commented 5 years ago

@eric-haibin-lin

eric-haibin-lin commented 5 years ago

It looks like the regression was introduced around April 16th, between the 20190416 and 20190417 nightly builds:

➜  mxnet git:(take) ✗ pip install mxnet==1.5.0b20190417
Requirement already satisfied: mxnet==1.5.0b20190417 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (1.5.0b20190417)
Requirement already satisfied: numpy<1.15.0,>=1.8.2 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from mxnet==1.5.0b20190417) (1.14.6)
Requirement already satisfied: requests>=2.20.0 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from mxnet==1.5.0b20190417) (2.22.0)
Requirement already satisfied: graphviz<0.9.0,>=0.8.1 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from mxnet==1.5.0b20190417) (0.8.4)
Requirement already satisfied: idna<2.9,>=2.5 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from requests>=2.20.0->mxnet==1.5.0b20190417) (2.8)
Requirement already satisfied: certifi>=2017.4.17 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from requests>=2.20.0->mxnet==1.5.0b20190417) (2019.6.16)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from requests>=2.20.0->mxnet==1.5.0b20190417) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from requests>=2.20.0->mxnet==1.5.0b20190417) (1.24.2)
➜  mxnet git:(take) ✗ python test.py
[20:54:47] src/operator/contrib/../tensor/../../common/utils.h:450:
Storage type fallback detected:
operator = norm
input storage types = [row_sparse, ]
output storage types = [default, ]
params = {}
context.dev_mask = cpu
The operator with default storage type will be dispatched for execution. You're seeing this warning message because the operator above is unable to process the given ndarrays with specified storage types, context and parameter. Temporary dense ndarrays are generated in order to execute the operator. This does not affect the correctness of the programme. You can set environment variable MXNET_STORAGE_FALLBACK_LOG_VERBOSE to 0 to suppress this warning.

[2.]
<NDArray 1 @cpu(0)>
➜  mxnet git:(take) ✗ pip install mxnet==1.5.0b20190416
Collecting mxnet==1.5.0b20190416
  Using cached https://files.pythonhosted.org/packages/48/41/99ca13c3173c3631a024ace26e36baedf7d0810c0ac465f22cc2f0af2796/mxnet-1.5.0b20190416-cp37-cp37m-macosx_10_11_x86_64.whl
Requirement already satisfied: numpy<1.15.0,>=1.8.2 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from mxnet==1.5.0b20190416) (1.14.6)
Requirement already satisfied: graphviz<0.9.0,>=0.8.1 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from mxnet==1.5.0b20190416) (0.8.4)
Requirement already satisfied: requests>=2.20.0 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from mxnet==1.5.0b20190416) (2.22.0)
Requirement already satisfied: certifi>=2017.4.17 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from requests>=2.20.0->mxnet==1.5.0b20190416) (2019.6.16)
Requirement already satisfied: idna<2.9,>=2.5 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from requests>=2.20.0->mxnet==1.5.0b20190416) (2.8)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from requests>=2.20.0->mxnet==1.5.0b20190416) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /Users/haibilin/miniconda3/lib/python3.7/site-packages (from requests>=2.20.0->mxnet==1.5.0b20190416) (1.24.2)
Installing collected packages: mxnet
  Found existing installation: mxnet 1.5.0b20190417
    Uninstalling mxnet-1.5.0b20190417:
      Successfully uninstalled mxnet-1.5.0b20190417
Successfully installed mxnet-1.5.0b20190416
➜  mxnet git:(take) ✗ python test.py

[2.]
<NDArray 1 @cpu(0)>
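
(test.py itself is not included in the thread; a hypothetical reconstruction consistent with the output above, a row_sparse input with L2 norm 2 on CPU, could be:)

import mxnet as mx

# hypothetical reconstruction of test.py; the original file is not shown
data = mx.nd.array([[0, 2], [0, 0]]).tostype('row_sparse')
print(data.norm())  # prints [2.]; on affected nightlies the fallback warning appears first
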
eric-haibin-lin commented 5 years ago

I did further bisecting; the error starts with commit 3f3ba92ae1468d08de088d2291ca14e2d5dc5515. @reminisce, this needs to be looked into a bit more.

ChaiBapchya commented 5 years ago

@mxnet-label-bot add [Bug, Operator]