chapter_preliminaries/probability.md

```
TypeError: no implementation found for 'numpy.broadcast_arrays' on types that implement __array_function__: [<class 'numpy.ndarray'>, <class 'mxnet.numpy.ndarray'>]
```

```
MXNetError: [09:16:23] src/operator/numpy/np_true_divide.cc:43: Check failed: lhs_dtype == rhs_dtype (7 vs. 0) : true_divide currently only supports same dtype for dividend and divisor
```
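The dtype check fails because the dividend and divisor have different dtypes. A minimal sketch of the general workaround, in plain NumPy, is to cast both operands to a common dtype before dividing (the array values here are illustrative, not taken from the notebook):

```python
import numpy as np

# Illustrative: an integer tally divided by a numeric constant mixes
# dtypes under MXNet's true_divide check; casting both operands to one
# dtype (here float32) sidesteps it.
counts = np.array([166, 172, 168, 163, 170, 161], dtype=np.int64)
estimates = counts.astype(np.float32) / np.float32(1000)
```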
chapter_multilayer-perceptrons/dropout.md

```python
mask = np.random.uniform(0, 1, X.shape) > drop_prob
return mask * X / (1.0 - drop_prob)
```

Fails on the second line; the problem appears to be multiplying a boolean mask by a float array.
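A possible workaround is to cast the boolean mask to `X`'s dtype before multiplying. A sketch in plain NumPy (the function name and structure follow the notebook's dropout helper, but this cast is an assumed fix, not the upstream one):

```python
import numpy as np

def dropout(X, drop_prob):
    # Generate a boolean keep-mask, then cast it to X's dtype so the
    # multiplication never mixes boolean and floating-point arrays.
    mask = (np.random.uniform(0, 1, X.shape) > drop_prob).astype(X.dtype)
    # Scale surviving activations to keep the expected value unchanged.
    return mask * X / (1.0 - drop_prob)
```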
chapter_deep-learning-computation/parameters.md

```python
data[:] = np.random.uniform(-10, 10, data.shape)
data *= np.abs(data) >= 5
```

Fails on the second line, again the multiplication between a boolean and a float array.
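The same explicit-cast workaround applies here; a sketch in plain NumPy (an assumed fix for illustration, with a stand-in array shape):

```python
import numpy as np

# Zero out entries with magnitude below 5, casting the boolean
# comparison result to the array's dtype before the in-place multiply
# so the operands share one dtype.
data = np.random.uniform(-10, 10, (4, 4)).astype(np.float32)
data *= (np.abs(data) >= 5).astype(data.dtype)
```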
chapter_recurrent-neural-networks/seq2seq.md

```
MXNetError: [10:46:26] src/operator/numpy/np_true_divide.cc:43: Check failed: lhs_dtype == rhs_dtype (0 vs. 6) : true_divide currently only supports same dtype for dividend and divisor

<ipython-input-10-23a855fab898> in train_s2s_ch8(model, data_iter, lr, num_epochs, ctx)
     24         metric.add(l.sum(), num_tokens)
     25     if epoch % 10 == 0:
---> 26         animator.add(epoch, (metric[0]/metric[1],))
     27 print('loss %.3f, %d tokens/sec on %s ' % (
     28     metric[0]/metric[1], metric[1]/timer.stop(), ctx))
```

All notebooks with train_s2s_ch8

Same MXNetError and traceback as above.
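The (0 vs. 6) case appears to be a float32 loss sum divided by an int64 token count. One way to sidestep it in train_s2s_ch8-style loops is to convert the accumulator entries to Python floats before dividing; a sketch with stand-in values (the metric contents here are illustrative, not from the notebook):

```python
import numpy as np

# Stand-ins for the accumulator: summed loss (float32), token count (int64).
metric = [np.float32(12.5), np.int64(5)]

# Converting to Python floats keeps the division out of the array
# dtype-checking path entirely.
avg_loss = float(metric[0]) / float(metric[1])
```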
chapter_optimization/optimization-intro.md

```
ValueError: mxnet.numpy operator `<function column_stack at 0x7ff910066e18>` has not been registered in the _NUMPY_ARRAY_FUNCTION_LIST. Please make sure you are using NumPy >= 1.17.0 and the operator implementation is compatible with NumPy. Then add the operator name to the list.
```
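Both this ValueError and the earlier broadcast_arrays TypeError come from NumPy's `__array_function__` dispatch protocol (NEP 18): a function not registered for a duck-array type falls through and an error is raised. A toy illustration in plain NumPy, with a hypothetical `MiniArray` class standing in for `mxnet.numpy.ndarray`:

```python
import numpy as np

class MiniArray:
    """Toy duck array implementing the __array_function__ protocol (NEP 18)."""

    # Functions this type has "registered"; everything else is rejected,
    # loosely mirroring MXNet's _NUMPY_ARRAY_FUNCTION_LIST.
    HANDLED = {np.concatenate}

    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_function__(self, func, types, args, kwargs):
        if func not in self.HANDLED:
            # NumPy converts this into "TypeError: no implementation
            # found for ..." -- the same failure mode seen above.
            return NotImplemented
        arrays = [a.data if isinstance(a, MiniArray) else a for a in args[0]]
        return MiniArray(func(arrays, **kwargs))
```

Here `np.concatenate([MiniArray([1]), MiniArray([2])])` dispatches successfully, while `np.column_stack` on the same inputs raises TypeError because it was never added to `HANDLED`.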
Issues from running D2L numpy2 branch
- chapter_preliminaries/probability.md
- chapter_multilayer-perceptrons/dropout.md
- chapter_deep-learning-computation/parameters.md
- chapter_recurrent-neural-networks/seq2seq.md
- All notebooks with train_s2s_ch8
- chapter_optimization/optimization-intro.md
- chapter_optimization/convexity.md
- chapter_natural-language-processing/sentiment-analysis-rnn.md
  - Note: caused by #16391 (https://github.com/apache/incubator-mxnet/pull/16391); verified the fix by reverting the PR. Also pinged DickJC123 for a fix.
- chapter_recommender-systems/autorec.md
  - Note: verified the fix by changing the type switch at src/operator/tensor/elemwise_binary_scalar_op.h:264 to the "WITH_BOOL" version.
Missing Variables and Functions