apache / mxnet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
https://mxnet.apache.org
Apache License 2.0

Numpy array interface does not promote bool/int types when multiplied by scalar float #19891

Open · aarmey opened this issue 3 years ago

aarmey commented 3 years ago

Description

In NumPy, multiplying a bool- or int-typed array by a scalar float promotes the output to a floating-point type. I believe the older MXNet interface behaves the same way. However, mxnet.numpy arrays retain their original bool/int dtype.

Error Message

There is no error message, just behavior that is inconsistent with NumPy.

To Reproduce

import mxnet as mx
from mxnet import numpy as np

A = np.array([0, 1, 0], dtype=np.bool)
B = np.array([0, 1, 2], dtype=np.int64)
C = np.array([0, 1, 2], dtype=np.float64)

print(A * 3.9)
print(B * 3.9)
print(C * 3.9)

Outputs:

[0 3 0]
[0 3 6]
[0.  3.9 7.8]
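For reference, the same three products in stock NumPy promote the bool and int64 arrays to a floating-point result. The snippet below is my own comparison sketch (not part of the original report); onp is plain numpy:

import numpy as onp  # stock NumPy, used only for comparison

A = onp.array([0, 1, 0], dtype=bool)
B = onp.array([0, 1, 2], dtype=onp.int64)
C = onp.array([0, 1, 2], dtype=onp.float64)

# Every product comes back with a floating-point dtype; the exact width of
# the bool case can vary with the NumPy version's casting rules.
for arr in (A, B, C):
    out = arr * 3.9
    print(out, out.dtype.kind)  # dtype kind is 'f' in all three cases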

Environment

Environment Information

----------Python Info----------
Version : 3.8.6
Compiler : GCC 9.3.0
Build : ('default', 'Oct 26 2020 14:01:59')
Arch : ('64bit', 'ELF')
------------Pip Info-----------
Version : 21.0.1
Directory : /usr/local/lib/python3.8/site-packages/pip
----------MXNet Info-----------
Version : 1.7.0
Directory : /usr/local/lib/python3.8/site-packages/mxnet
Commit Hash : 64f737cdd59fe88d2c5b479f25d011c5156b6a8a
Library : ['/usr/local/lib/python3.8/site-packages/mxnet/libmxnet.so']
Build features:
✖ CUDA ✖ CUDNN ✖ NCCL ✖ CUDA_RTC ✖ TENSORRT
✔ CPU_SSE ✔ CPU_SSE2 ✔ CPU_SSE3 ✔ CPU_SSE4_1 ✔ CPU_SSE4_2 ✖ CPU_SSE4A ✔ CPU_AVX ✖ CPU_AVX2
✔ OPENMP ✖ SSE ✔ F16C ✖ JEMALLOC ✔ BLAS_OPEN ✖ BLAS_ATLAS ✖ BLAS_MKL ✖ BLAS_APPLE ✔ LAPACK ✔ MKLDNN ✔ OPENCV ✖ CAFFE ✖ PROFILER ✔ DIST_KVSTORE ✖ CXX14 ✖ INT64_TENSOR_SIZE ✔ SIGNAL_HANDLER ✖ DEBUG ✖ TVM_OP
----------System Info----------
Platform : Linux-5.8.0-41-generic-x86_64-with-glibc2.2.5
system : Linux
node : aretha
release : 5.8.0-41-generic
version : #46-Ubuntu SMP Mon Jan 18 16:48:44 UTC 2021
----------Hardware Info----------
machine : x86_64
processor : x86_64
Architecture : x86_64
CPU op-mode(s) : 32-bit, 64-bit
Byte Order : Little Endian
Address sizes : 43 bits physical, 48 bits virtual
CPU(s) : 32
On-line CPU(s) list : 0-31
Thread(s) per core : 2
Core(s) per socket : 16
Socket(s) : 1
NUMA node(s) : 2
Vendor ID : AuthenticAMD
CPU family : 23
Model : 1
Model name : AMD Ryzen Threadripper 1950X 16-Core Processor
Stepping : 1
Frequency boost : enabled
CPU MHz : 2561.528
CPU max MHz : 3400.0000
CPU min MHz : 2200.0000
BogoMIPS : 6786.49
Virtualization : AMD-V
L1d cache : 512 KiB
L1i cache : 1 MiB
L2 cache : 8 MiB
L3 cache : 32 MiB
NUMA node0 CPU(s) : 0-7,16-23
NUMA node1 CPU(s) : 8-15,24-31
Vulnerability Itlb multihit : Not affected
Vulnerability L1tf : Not affected
Vulnerability Mds : Not affected
Vulnerability Meltdown : Not affected
Vulnerability Spec store bypass : Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1 : Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2 : Mitigation; Full AMD retpoline, STIBP disabled, RSB filling
Vulnerability Srbds : Not affected
Vulnerability Tsx async abort : Not affected
Flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate sme ssbd sev vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0054 sec, LOAD: 0.4757 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.2448 sec, LOAD: 0.2354 sec.
Error open Gluon Tutorial(cn): https://zh.gluon.ai, DNS finished in 0.970501184463501 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0352 sec, LOAD: 0.0971 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0439 sec, LOAD: 0.3022 sec.
Error open Conda: https://repo.continuum.io/pkgs/free/, HTTP Error 403: Forbidden, DNS finished in 0.16263580322265625 sec.
----------Environment----------
KMP_DUPLICATE_LIB_OK="True"
KMP_INIT_AT_FORK="FALSE"

leezu commented 3 years ago

Hi Aaron, the NumPy interface in v1.x is experimental and contains some known issues. I confirmed the bug isn't present in MXNet 2. I think you currently have workarounds for this bug on v1.x; I recommend keeping them for now, and the issue will be resolved once v2 is stable.
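One possible stopgap on 1.x, sketched here as a suggestion rather than anything prescribed in the thread, is to cast the bool/int array to the target float dtype before the scalar multiplication:

from mxnet import numpy as np

B = np.array([0, 1, 2], dtype=np.int64)

# Promote manually, since the 1.x numpy interface keeps the int64 dtype when
# multiplying by a Python float; this should match stock NumPy's float result.
out = B.astype(np.float64) * 3.9
print(out, out.dtype)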

aarmey commented 3 years ago

Great, thanks @leezu.