Closed peastman closed 5 years ago
Sounds like a good approach to me. Re: naming, my preference would be for `inner`, `outer`, and `dot`, as `mul` or "multiplication" is an overloaded term.
Sounds good. I'll get started on it. I don't see any alternative to having 36 versions of each function for all possible combinations of input types. That's a bit awkward but I think it's necessary.
Closed in #96.
Following up on one of the issues discussed in #83, I've been thinking about how to implement inner and outer products for NDArrays. Numpy has a ton of different functions for variations on this: `dot()`, `vdot()`, `tensordot()`, `inner()`, `outer()`, `matmul()`, `einsum()`, etc. In my opinion that is kind of excessive. I suggest adding three functions which will cover most common cases.

`dot` computes the dot product of two arrays. The dot product is taken over all elements, so the arrays don't need to be 1D. The return value of this function is a scalar.

Another function computes an inner product. This could be called `mul`, `matmul`, `inner`, or various other things. The product is taken over a single axis of each array, and the return value is another array. By default the product is over the last axis of the first array and the first axis of the second array. You can optionally specify different axes to perform it over. If the input arrays have N and M dimensions, respectively, the output has N+M-2 dimensions. If both inputs are 1D, this should throw an exception.

The final function computes an outer product of two arrays. This would probably be called `outer`. If the input arrays have N and M dimensions, respectively, the output has N+M dimensions.

All these functions should support infix notation.
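For reference, the shape behavior described above can be sketched with numpy (used here only to illustrate the expected semantics, not as the proposed implementation):

```python
import numpy as np

# dot: the product is taken over all elements, so the result is a scalar
# even for multi-dimensional inputs.
a = np.arange(6.0).reshape(2, 3)
b = np.ones((2, 3))
dot_result = np.sum(a * b)  # scalar: 0+1+2+3+4+5 = 15.0

# inner: contract the last axis of the first array with the first axis
# of the second. With N=3 and M=2 input dimensions, the output has
# N+M-2 = 3 dimensions.
x = np.arange(24.0).reshape(2, 3, 4)
y = np.arange(20.0).reshape(4, 5)
inner_result = np.tensordot(x, y, axes=([-1], [0]))
assert inner_result.shape == (2, 3, 5)

# outer: no axes are contracted, so with N=1 and M=1 input dimensions
# the output has N+M = 2 dimensions.
u = np.arange(3.0)
v = np.arange(4.0)
outer_result = np.multiply.outer(u, v)
assert outer_result.shape == (3, 4)
```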