apache / mxnet

Feature request: square op for row_sparse and sum op with keepdims #10786

Open yifeim opened 6 years ago

yifeim commented 6 years ago

Description

Calling square on a row_sparse ndarray throws a storage fallback warning, and sparse sum with keepdims=True throws a similar warning. I need both operations to implement factorization machines. There is, however, an example that circumvents both obstacles with an internal function:

https://github.com/apache/incubator-mxnet/blob/14206978f461364c53aaf1c787e2f268e2a94b00/example/sparse/factorization_machine/model.py#L38

Still, it would be nice to expose these functionalities separately for general usability.
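
For concreteness, here is a minimal NDArray-level sketch of the two fallbacks (shapes and values are arbitrary; behavior as observed on MXNet 1.1.0, and the exact warning text may differ from the training run below):

import mxnet as mx

x = mx.nd.ones((4, 3)).tostype('row_sparse')
x.attach_grad(stype='row_sparse')

with mx.autograd.record():
    y = mx.nd.square(x)   # forward supports row_sparse
y.backward()              # _backward_square falls back to dense storage

s = mx.nd.sum(x, axis=1, keepdims=True)  # keepdims=True also forces a dense fallback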

Environment info (Required)

----------Python Info----------
Version      : 3.6.4
Compiler     : GCC 7.2.0
Build        : ('default', 'Mar 13 2018 01:15:57')
Arch         : ('64bit', '')
------------Pip Info-----------
Version      : 9.0.1
Directory    : /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/pip
----------MXNet Info-----------
Version      : 1.1.0
Directory    : /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet
Commit Hash  : 07a83a0325a3d782513a04f47d711710972cb144
----------System Info----------
Platform     : Linux-4.4.0-1055-aws-x86_64-with-debian-stretch-sid
system       : Linux
node         : ip-172-31-35-190
release      : 4.4.0-1055-aws
version      : #64-Ubuntu SMP Thu Apr 5 17:06:36 UTC 2018
----------Hardware Info----------
machine      : x86_64
processor    : x86_64
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0017 sec, LOAD: 0.5422 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.2314 sec, LOAD: 0.0510 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0396 sec, LOAD: 0.1310 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0176 sec, LOAD: 0.0979 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0023 sec, LOAD: 0.6010 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0096 sec, LOAD: 0.0716 sec.

Package used (Python/R/Scala/Julia): Python.

Error Message:

Storage type fallback detected:
operator = _backward_square
input storage types = [row_sparse, row_sparse, ]
output storage types = [row_sparse, ]
params = {}
context.dev_mask = gpu
The operator with default storage type will be dispatched for execution. You're seeing this warning message because the operator above is unable to process the given ndarrays with specified storage types, context and parameter. Temporary dense ndarrays are generated in order to execute the operator. You can set environment variable MXNET_STORAGE_FALLBACK_LOG_VERBOSE to 0 to suppress this warning.
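
As the message notes, the fallback affects performance and memory rather than correctness, and its verbosity is controlled by the MXNET_STORAGE_FALLBACK_LOG_VERBOSE environment variable named in the warning. For example, to silence it (setting the variable before importing mxnet, to be safe):

import os
os.environ['MXNET_STORAGE_FALLBACK_LOG_VERBOSE'] = '0'  # suppress fallback warnings
import mxnet as mx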

Minimum reproducible example

This is a factorization machine model, similar to the linked example, that reproduces both warnings:

import numpy as np, scipy.sparse as ss
import mxnet as mx

import logging, importlib
importlib.reload(logging)  # reset any existing handlers so basicConfig takes effect
logging.basicConfig(level=logging.INFO)

# Random dense data stored as CSR, with binary labels cast to float32.
data = ss.csr_matrix(np.random.rand(100, 1000), dtype='float32')
label = (np.random.rand(100, 1) > 0.5).astype('float32')
data_iter = mx.io.NDArrayIter(
    mx.nd.sparse.csr_matrix(data),
    label,
    batch_size=100, last_batch_handle='discard')

# Factorization machine parameters: global bias, linear weights, latent factors.
w0 = mx.sym.var('w0_bias', shape=(1,), wd_mult=0.)
w  = mx.sym.var('w_weight', shape=(1000, 1), stype='row_sparse', wd_mult=1e-10)
V  = mx.sym.var('V_weight', shape=(1000, 2), stype='row_sparse', wd_mult=1.)
X  = mx.sym.var('data', stype='csr')

# FM score: w0 + Xw + 0.5 * ((XV)^2 - X^2 V^2), summed over the factor axis.
# The square() and sum(keepdims=True) calls below trigger the fallback.
fm = mx.sym.broadcast_add(
    mx.sym.sparse.dot(X, w)
    + 0.5 * mx.sym.sparse.dot(X, V).square().sum(axis=1, keepdims=True)
    - 0.5 * mx.sym.sparse.dot(X.square(), V.square()).sum(axis=1, keepdims=True),
    w0,
    name='fm',
)
obj = mx.sym.LogisticRegressionOutput(fm, mx.sym.var('softmax_label'))

mod = mx.mod.Module(obj, context=mx.gpu())
mod.fit(data_iter, num_epoch=10)
# Mean absolute error between predicted probabilities and labels.
np.fabs(mod.predict(data_iter).asnumpy() - label).mean()
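
For reference, the second-order term above uses the standard factorization machine identity (Rendle, 2010), which turns the pairwise interaction sum into two matrix products:

$$\sum_{i<j} \langle v_i, v_j \rangle\, x_i x_j \;=\; \frac{1}{2} \sum_{f=1}^{k} \left[ \Big( \sum_i v_{i,f}\, x_i \Big)^2 - \sum_i v_{i,f}^2\, x_i^2 \right]$$

In matrix form this is 0.5 * ((XV)^2 - X^2 V^2).sum(axis=1, keepdims=True), with elementwise squares throughout, which is exactly the pair of square/sum calls that trigger the fallback.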

Steps to reproduce

  1. Run the script above in the mxnet_p36 conda environment.

What have you tried to solve it?

  1. Use the workaround from the example at

https://github.com/apache/incubator-mxnet/blob/14206978f461364c53aaf1c787e2f268e2a94b00/example/sparse/factorization_machine/model.py#L38
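
The internal function used there appears to be the fused mx.sym._internal._square_sum, which computes sum(square(x)) along an axis in one sparse-aware operator. A minimal sketch of that workaround, reusing the variable names from the model above (op name and sparse support assumed from the linked example):

import mxnet as mx

X = mx.sym.var('data', stype='csr')
V = mx.sym.var('V_weight', shape=(1000, 2), stype='row_sparse')

# Fused square-then-sum over the factor axis; stays sparse-aware, avoiding
# the separate square()/sum(keepdims=True) calls that trigger the fallback.
v_s = mx.sym._internal._square_sum(data=V, axis=1, keepdims=True)

# The X-dependent part of the second-order term then becomes a dot product
# with the elementwise square of the csr input.
term = mx.sym.dot(mx.sym.square(X), v_s)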

roywei commented 6 years ago

@eric-haibin-lin could you help add the labels Feature Request and Sparse? Thanks!

haojin2 commented 6 years ago

@yifeim Working on the backward for square(rsp) now.

yifeim commented 6 years ago

@haojin2 Thanks a lot! Please let me know when it is done and I will be happy to test it out with the previous examples.