apache / mxnet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
https://mxnet.apache.org
Apache License 2.0

mx.nd._internal._mul_scalar and other scalar operators give incorrect results when PySide2 is used #17140

Closed: nickguletskii closed this issue 4 years ago

nickguletskii commented 4 years ago

Description

Creating an instance of PySide2.QtCore.QCoreApplication makes mx.nd._internal._mul_scalar and other scalar operators function incorrectly.

To Reproduce

import argparse

import mxnet as mx
import numpy as np

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Demonstrate an MXNet incompatibility with PySide2.")
    parser.add_argument("--import-pyside", dest="import_pyside", action="store_true", default=False)
    parser.add_argument("--create-app", dest="create_app", action="store_true", default=False)
    parser.add_argument("--scalar", type=float, default=0.6)
    args = parser.parse_args()
    if args.import_pyside:
        from PySide2.QtCore import QCoreApplication

        if args.create_app:
            # Instantiating QCoreApplication is what triggers the bug;
            # merely importing PySide2 is not enough (see step 3 below).
            app = QCoreApplication()

    tensor = mx.nd.full(shape=(375, 600, 3), val=255).astype(np.int32)
    tensor = tensor.astype(np.float32)
    scalar = args.scalar
    print("All of the following operations should yield the same result:")
    print("Multiply numpy ndarray by scalar: ", (tensor.asnumpy() * scalar).max())
    print("Multiply MXNet NDArray by MXNet NDArray: ", (tensor * mx.nd.array([[[scalar]]])).max().asscalar())
    print("Multiply MXNet NDArray by float: ", (tensor * scalar).max().asscalar())

Steps to reproduce


  1. Create a Python 3.7.2 virtual environment and install PySide2 and MXNet.
  2. Run the provided script without any arguments. You should see the following results:
    All of the following operations should yield the same result:
    Multiply numpy ndarray by scalar:  153.0
    Multiply MXNet NDArray by MXNet NDArray:  153.0
    Multiply MXNet NDArray by float:  153.0
  3. Run the provided script with --import-pyside. You should observe the same results.
  4. Run the provided script with --import-pyside --create-app. The following output will be printed to the console:
    All of the following operations should yield the same result:
    Multiply numpy ndarray by scalar:  153.0
    Multiply MXNet NDArray by MXNet NDArray:  153.0
    Multiply MXNet NDArray by float:  0.0
  5. Run the provided script with --import-pyside --create-app --scalar 1.6. The following output will be printed to the console:
    All of the following operations should yield the same result:
    Multiply numpy ndarray by scalar:  408.0
    Multiply MXNet NDArray by MXNet NDArray:  408.0
    Multiply MXNet NDArray by float:  255.0

This suggests that the float scalar is being truncated to an integer before the multiplication: 0.6 becomes 0 (255 * 0 = 0.0) and 1.6 becomes 1 (255 * 1 = 255.0).
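
A likely mechanism, for context: on Unix, QCoreApplication's constructor is documented to call setlocale(LC_ALL, ""), so the process adopts the user's locale settings. In a locale whose decimal separator is a comma (note the comma-formatted CPU max/min MHz values in the lscpu output below), C-level string-to-float parsing of "0.6" stops at the '.' and returns 0. A minimal sketch of that failure mode, independent of MXNet and Qt; it assumes a comma-decimal locale such as de_DE.UTF-8 is installed:

import ctypes
import ctypes.util
import locale

# Locale-dependent C parsing, the suspected failure mode.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.strtof.restype = ctypes.c_float
libc.strtof.argtypes = [ctypes.c_char_p, ctypes.c_void_p]

print(libc.strtof(b"0.6", None))  # 0.6 under the default "C" locale

# Mimic what QCoreApplication does on Unix: adopt a user locale.
locale.setlocale(locale.LC_ALL, "de_DE.UTF-8")  # assumed installed
print(libc.strtof(b"0.6", None))  # stops at the '.', yields 0.0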

What have you tried to solve it?

  1. Switching to the naive engine and disabling operator fusion, common subexpression elimination, and in-place optimizations did not solve the issue (a sketch of toggling these follows below).
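
For reference, a sketch of how those knobs can be toggled. The variable names are taken from the MXNet 1.6 environment-variable documentation (treat them as assumptions for other versions), and they must be set before mxnet is imported:

import os

# Set before importing mxnet; names as documented for MXNet 1.6.
os.environ["MXNET_ENGINE_TYPE"] = "NaiveEngine"    # serial, deterministic engine
os.environ["MXNET_USE_FUSION"] = "0"               # disable pointwise operator fusion
os.environ["MXNET_ELIMINATE_COMMON_EXPR"] = "0"    # disable common subexpression elimination
os.environ["MXNET_EXEC_ENABLE_INPLACE"] = "false"  # disable in-place optimizations

import mxnet as mx  # noqa: E402  (import after configuring the environment)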

Environment

We recommend using our script for collecting the diagnostic information. Run the following command and paste the outputs below:

----------Python Info----------
Version      : 3.7.2
Compiler     : GCC 7.4.0
Build        : ('default', 'Oct 23 2019 22:40:04')
Arch         : ('64bit', '')
------------Pip Info-----------
Version      : 18.1
Directory    : /home/nick/.pyenv/versions/mxnet-dev/lib/python3.7/site-packages/pip
----------MXNet Info-----------
Version      : 1.6.0
Directory    : /home/nick/workspaces/3rdparty/incubator-mxnet/python/mxnet
Num GPUs     : 1
Hashtag not found. Not installed from pre-built package.
----------System Info----------
Platform     : Linux-5.0.0-37-generic-x86_64-with-debian-buster-sid
system       : Linux
node         : REDACTED
release      : 5.0.0-37-generic
version      : #40~18.04.1-Ubuntu SMP Thu Nov 14 12:06:39 UTC 2019
----------Hardware Info----------
machine      : x86_64
processor    : x86_64
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               42
Model name:          Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz
Stepping:            7
CPU MHz:             2314.518
CPU max MHz:         3700,0000
CPU min MHz:         1600,0000
BogoMIPS:            6585.04
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            6144K
NUMA node0 CPU(s):   0-3
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts md_clear flush_l1d
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0069 sec, LOAD: 0.8626 sec.
Timing for GluonNLP GitHub: https://github.com/dmlc/gluon-nlp, DNS: 0.0005 sec, LOAD: 0.5098 sec.
Timing for GluonNLP: http://gluon-nlp.mxnet.io, DNS: 0.1136 sec, LOAD: 2.0384 sec.
Timing for D2L: http://d2l.ai, DNS: 0.0912 sec, LOAD: 1.0025 sec.
Timing for D2L (zh-cn): http://zh.d2l.ai, DNS: 0.0514 sec, LOAD: 1.1740 sec.
Timing for FashionMNIST: https://repo.mxnet.io/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.1106 sec, LOAD: 0.7961 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0067 sec, LOAD: 0.8546 sec.
nickguletskii commented 4 years ago

Seems to be the same issue as #16134. Closing.
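
For anyone hitting this before the underlying issue is fixed: Qt's documentation suggests restoring the numeric locale right after constructing the application object. A hedged workaround sketch, assuming the QCoreApplication/MXNet interleaving from the reproduction above:

import locale

from PySide2.QtCore import QCoreApplication

app = QCoreApplication()
# QCoreApplication's constructor calls setlocale(LC_ALL, "") on Unix;
# restore the "C" numeric locale so '.' is the decimal separator again
# before any MXNet scalar operator parses its arguments.
locale.setlocale(locale.LC_NUMERIC, "C")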