For example, vector'vector yielding a vector (#2472, JuliaLang/LinearAlgebra.jl#8), vector' yielding a matrix, and vector'' yielding a matrix (#2686) are all bad mathematics.
The double-dual of a finite dimensional vector space is isomorphic to it, not identical. So I'm not clear on how this is bad mathematics. It's more that we tend to gloss over the distinction between things that are isomorphic in mathematics, because human brains are good at handling that sort of slippery ambiguity and just doing the right thing. That said, I agree that this should be improved, but not because it's mathematically incorrect, but rather because it's annoying.
How can v' == v, but v'*v != v*v? Does it make more sense than we thought for x' * y to be its own operator?
The double-dual of a finite dimensional vector space is isomorphic to it, not identical.
(Speaking as myself now) It is not just isomorphic, it is naturally isomorphic, i.e. the isomorphism is independent of the choice of basis. I can’t think of a practical application for which it would be worth distinguishing between this kind of isomorphism and an identity. IMO the annoyance factor comes from making this kind of distinction.
Does it make more sense than we thought for x' * y to be its own operator?
That was the impression I got out of this afternoon’s discussion with @alanedelman.
I think what Jeff is asking is right on the money ... it is starting to look like x'*y and x*y' are making more sense than ever.
I'm in violent agreement with @stefan. Bad mathematics was not meant to mean wrong mathematics; it was meant to mean annoying mathematics. There are lots of things that are technically right, but not very nice ....
If we go with this logic, here are two choices we have:
1. x*x remains an error ..... perhaps with a suggestion "maybe you want to use dot"
2. x*x is the dot product (I don't love that choice)
If x and x' are the same thing, then if you want (x')*y to mean dot(x,y), that implies that x*y is also dot(x,y). There's no way out of that. We could make x'y and x'*y a different operation, but I'm not sure that's a great idea. People want to be able to parenthesize this in the obvious way and have it still work.
I would point out that if we allow x*x to mean the dot product, there's basically no going back. That is going to get put into people's code everywhere and eradicating it will be a nightmare. So, natural isomorphism or not, this ain't pure mathematics and we've got to deal with the fact that different things in a computer are different.
Here is a practical discussion of distinguishing "up tuples" and "down tuples" that I like:
http://mitpress.mit.edu/sites/default/files/titles/content/sicm/book-Z-H-79.html#%_idx_3310
It carefully avoids words like "vector" and "dual", perhaps to avoid annoying people. I find the application to partial derivatives compelling though:
http://mitpress.mit.edu/sites/default/files/titles/content/sicm/book-Z-H-79.html#%_sec_Temp_453
Another reason to distinguish M[1,:] and M[:,1] is that currently our broadcasting behavior allows this very convenient behavior: M./sum(M,1) is column-stochastic and M./sum(M,2) is row-stochastic. The same could be done for normalization if we "fixed" the norm function to allow application over rows and columns easily. Of course, we could still have sum(M,1) and sum(M,2) return matrices instead of up and down vectors, but that seems a bit off.
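A minimal sketch of that idiom (using the sum(M, dim) form current at the time; newer Julia spells it sum(M, dims=1)):

M = [1.0 2.0; 3.0 4.0]
M ./ sum(M, 1)   # each column sums to 1 (column-stochastic)
M ./ sum(M, 2)   # each row sums to 1 (row-stochastic)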
I like the idea of up and down vectors. The trouble is generalizing it to higher dimensions in a way that's not completely insane. Or you can just make vectors a special case. But that feels wrong too.
It's true that up/down may be a separate theory. The approach to generalizing them seems to be nested structure, which takes things in a different direction. Very likely there is a reason they don't call them vectors.
Also, x*y = dot(x,y) would make * non-associative, as in x*(y*z) vs. (x*y)*z. I really hope that we can avoid that.
Yes. To me, that's completely unacceptable. I mean, technically, floating-point * is non-associative, but it's nearly associative, whereas this would just be flagrantly non-associative.
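To make the failure mode concrete, a small sketch written with explicit dot calls (since x*y is not actually defined this way for vectors):

using LinearAlgebra   # dot lives here in recent Julia; it was in Base originally
x = [1, 2]; y = [3, 4]; z = [5, 6]
dot(x, y) * z   # "(x*y)*z" would be 11 * [5, 6] = [55, 66]
x * dot(y, z)   # "x*(y*z)" would be [1, 2] * 39 = [39, 78]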
We all agree that x*x should not be the dot product. The question remains whether we can think of v'w and v'*w as the dot product -- I really really like this working that way.
@JeffBezanson and I were chatting. A proposal is the following:
- v' is an error for vectors (this is what Mathematica does)
- v'w and v'*w is a dot product (result = scalar)
- v*w is an outer product matrix (result = matrix)

There is no distinction between row and column vectors. I liked this anyway and was happy to see Mathematica's precedent. From Mathematica (http://reference.wolfram.com/mathematica/tutorial/VectorsAndMatrices.html): "Because of the way Mathematica uses lists to represent vectors and matrices, you never have to distinguish between 'row' and 'column' vectors."
Users have to be aware that there are no row vectors .... period.
Thus if M is a matrix, M[1,:]*v is an error ..... (assuming we go with M[1,:] is a scalar). The warning could suggest trying dot or '* or M[i:i,:].
M[[1],:]*v or M[1:1,:]*v is a vector of length 1 (this is Julia's current behavior anyway).
Regarding the closely related issue in https://groups.google.com/forum/#!topic/julia-users/L3vPeZ7kews
Mathematica compresses scalar-like array sections:
m = Array[a, {2, 2, 2}]
Out[49]= {{{a[1, 1, 1], a[1, 1, 2]}, {a[1, 2, 1], a[1, 2, 2]}}, {{a[2, 1, 1], a[2, 1, 2]}, {a[2, 2, 1], a[2, 2, 2]}}}
In[123]:= Dimensions[m]
Dimensions[m[[All, 1, All]]]
Dimensions[m[[2, 1, All]]]
Dimensions[m[[2, 1 ;; 1, All]]]
Out[123]= {2, 2, 2}
Out[124]= {2, 2}
Out[125]= {2}
Out[126]= {1, 2}
[Edit: code formatting – @StefanKarpinski]
@alanedelman
assuming we go with M[1,:] is a scalar
do you mean M[1,:] is just a vector?
Yes, sorry. My mind meant M[1,:] was processing the scalar 1 :-)
Mathematica uses the period . rather than the asterisk *, and then goes the whole 9 yards and makes (vector . vector) into a scalar, exactly what we are avoiding with the asterisk.
There are no doubt many problems with the period, one of which is that it just doesn't look like the "dot" in a dot product, and another of which is that it clashes with the "pointwise op" reading of dot. Unicode does provide very nicely a character named "the dot operator", ⋅ (char(8901)), which we can imagine offering, so we could have (v ⋅ w) becoming synonymous with (v'*w).
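For reference, this is essentially what later Julia versions ended up providing: ⋅ parses as an infix operator and is bound to dot (exported by LinearAlgebra; it lived in Base before the split):

using LinearAlgebra
v = [1, 2]; w = [3, 4]
v ⋅ w   # 11, same as dot(v, w)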
In summary, a current proposal subject for debate is:
1. A[i,:] is a vector, as is A[:,i,j]
2. A[ i:i , : ] or A[ [i], : ] returns a matrix with one row
3. v'w or v'*w is the dot product for vectors (similarly v*w' for outer product)
4. v' is undefined for vectors (point the user to permutedims(v,1) ????)
5. v*A returns a vector if A is a matrix
6. v⋅w also returns the dot product (but does not go as far as Mathematica's . by working on matrices)
7. v*w is undefined for vectors, but a warning might tip the user off with good suggestions including ⋅
Consequences are that
a. if you stick to all vectors being column vectors, everything works
b. if you make everything a matrix, everything certainly works, and it's easy to make everything a matrix
c. if your mind can't distinguish a row vector from a one-row matrix, chances are you'll be educated politely and gracefully
d. this dot notation ⋅ is kind of pleasant to the eye
Suggestion 5) looks very odd to me. I prefer v'*A so that it is explicit that you are using the dual vector. This is particularly important in complex vector spaces where the dual isn't just a "shape" transformation.
I want to echo @StefanKarpinski that it would be rather unfortunate to lose our concise broadcasting behavior in all this. After this change, what is the concise syntax for taking a vector v and normalizing the columns of matrix A by those values? Currently, one can use A ./ v'. This is extremely nice for data manipulation.
Good questions.
My scheme does not preclude v'*A taking the complex conjugate of v and multiplying by A, and all the various other cases that I didn't yet explicitly mention, but readily could. We could eliminate 5; perhaps that's desirable, since it doesn't conform to my column vector rule.
This approach to broadcasting is cute and kludgy. One solution now is A ./ v[:,[1]]. It has the benefit of documenting which dimension is being broadcast on and generalizes to higher dimensional arrays. Oh, and the v[:,[1]] solution has the virtue of NOT taking the complex conjugate, which is probably what the user intends .....
I LIKE THESE TWO EXAMPLES because the first is a LINEAR ALGEBRA example where a complex conjugate is desired very frequently, but the second example is a MULTIDIMENSIONAL DATA EXAMPLE where we want things to work in all dimensions, not just for matrices, and we very likely don't want a complex conjugate.
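A small sketch of the conjugation difference being pointed out (hypothetical data):

v = [1 + 2im, 3im]
v'          # row-shaped with conjugated entries: 1-2im, -3im
v[:, [1]]   # 2x1 array with the original, unconjugated entries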
⋅ requires JuliaLang/julia#552. This is the third time it's shown up in the past two weeks.
Another reason to distinguish M[1,:] and M[:,1] is that currently our broadcasting behavior allows this very convenient behavior: M./sum(M,1) is column-stochastic and M./sum(M,2) is row-stochastic. The same could be done for normalization if we "fixed" the norm function to allow application over rows and columns easily. Of course, we could still have sum(M,1) and sum(M,2) return matrices instead of up and down vectors, but that seems a bit off.
It seems to me that while having the broadcasting behavior is nice some of the time, you end up having to squeeze those extra unit dims out just as often. Thus, having to do the opposite some of the time is OK if the rest of the system is nicer (and I do think having scalar dimensions dropped will make the system nicer). Thus you will need a function like
julia> widen(A::AbstractArray,dim::Int) = reshape(A,insert!([size(A)...],dim,1)...)
# methods for generic function widen
widen(A::AbstractArray{T,N},dim::Int64) at none:1
which will allow code like M ./ widen(sum(M,2),2) or A ./ widen(v,1) (see @blakejohnson example above)
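A quick usage sketch of the widen definition above (the new singleton dimension is inserted at position dim):

v = [1, 2, 3]
widen(v, 1)   # 1x3: singleton dimension inserted at position 1
widen(v, 2)   # 3x1: singleton dimension inserted at position 2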
M[:,0,:] and v[:,0] ?????
I'm more with @blakejohnson on the reduction issue; I personally think it's clearer to squeeze dimensions than to widen them. I suspect I'd be perpetually looking at the docs to figure out whether widen inserts the dimension at or after the indicated index, and the numbering gets a little more complex if you want to widen over multiple dimensions at once. (What does widen(v, (1, 2)) for a vector v do?) None of these are issues for squeeze.
Regardless of whether we would widen or squeeze by default, I think that Julia should follow the lead of numpy when it comes to widening and allow something like v[:, newaxis]. But I do believe that I prefer to keep dimensions instead of discarding them; it's harder to catch a bug where you accidentally widened the wrong way than one where you squeezed the wrong way (which will usually give an error).
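Without any new syntax, the same widening can already be spelled with a plain reshape; a minimal sketch:

v = [1, 2, 3]
reshape(v, length(v), 1)   # 3x1, analogous to numpy's v[:, newaxis]
reshape(v, 1, length(v))   # 1x3, analogous to v[newaxis, :]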
In the list of @alanedelman I feel that
v*A returns a vector if A is a matrix
is not good. v*A should be an error if A is not 1x1 (mismatch of the range of the index); v'*A should be the proper way to do it.
One way to handle this issue is to automatically convert vector v to an nx1 matrix (when needed) and always treat v' as a 1xn matrix (never convert that to a vector or nx1 matrix). Also, we allow automatic conversion of a 1x1 matrix to a scalar number (when needed).
I feel that this represents a consistent and uniform way to think about linear algebra. (good mathematics)
A uniform way to handle all those issues is to allow automatic (type?) conversion (when needed) between arrays of size (n), (n,1), (n,1,1), (n,1,1,1), etc. (but not between arrays of size (n,1) and (1,n)), just like we automatically convert real numbers to complex numbers when needed.
In this case an array of size (1,1) can be converted to a number (when needed) (see JuliaLang/julia#4797).
Xiao-Gang (a physicist)
This leaves v'*A however .... I really want v'*A*w to work
My impression of linear algebra in Julia is that it is very much organized like matrix algebra, although scalars and true vectors exist (which I think is good!)
Let us consider how to handle a product like x*y*z*w, where each factor may be a scalar, vector, or matrix, possibly with a transpose on it. Matrix algebra defines the product of matrices, where a matrix has size n x m. One approach would be to extend this definition so that n or m might be replaced by absent, which would act like a value of one as far as computing the product is concerned, but is used to distinguish scalar and vectors from matrices:
1. absent x absent
2. n x absent
3. absent x n

Ideally, we would like to arrange things so that we never need to represent row vectors, but it would be enough to implement operations like x'*y and x*y'. I get the feeling that this is the flavor of scheme that many of us are searching for.
But I'm beginning to suspect that banning row vectors in this kind of scheme will come at a high cost. Example: Consider how we would need to parenthesize a product to avoid forming a row vector at any intermediate step (a is a scalar, u and v are vectors):
a*u'*v = a*(u'*v) // a*u' is forbidden
v*u'*a = (v*u')*a // u'*a is forbidden
To evaluate a product x*y'*z while avoiding producing row vectors, we would need to know the types of the factors before picking the multiplication order! If the user has to do it himself, it seems like an obstacle to generic programming. And I'm not sure how Julia could do it automatically in a sane way.
Another reason not to fix the multiplication order in advance: I seem to remember that there was an idea to use dynamic programming to choose the optimal evaluation order of *(x,y,z,w) to minimize the number of operations needed. Anything that we do to avoid forming row vectors will likely interfere with this.
So right now, introducing a transposed vector type seems like the most sane alternative to me. That, or doing everything as now but dropping trailing singleton dimensions when keeping them would result in an error.
Transposition is just a particular way to permute modes. If you allow v.' where v is a vector, then permutedims(v,[2 1]) should return the exact same thing. Either both return a special row vector type, or they introduce a new dimension.
Having a special type for row vectors doesn't look like a good solution to me, because what will you do about other types of mode-n vectors, e.g., permutedims([1:4],[3 2 1])? I urge you to also take multilinear algebra into account before making a decision.
@toivoh mentioned that
"One approach would be to extend this definition so that n or m might be replaced by absent, which would act like a value of one as far as computing the product is concerned, but is used to distinguish scalar and vectors from matrices:
In multilinear algebra (or for high-rank tensors) the above proposal corresponds to using absent to represent many indices of range 1, i.e. size (m,n,absent) may correspond to (m,n), (m,n,1), (m,n,1,1), etc.
If we use this interpretation of absent, then 1. and 2. are OK and nice to have, but 3. may not be OK. We do not want to mix arrays of size (1,n) and (1,1,n).
I'm not a specialist in tensor theory, but I have used all the systems mentioned above (without any add-on packages) for substantial projects involving linear algebra.
[TL;DR: skip to SUMMARY]
Here are the most common scenarios in which I have found a need for greater generality in array handling than commonplace matrix-vector operations:
(1) Functional analysis: For instance, as soon as you're using the Hessian of a vector-valued function, you need higher-order tensors to work. If you're writing a lot of math, it would be a huge pain to have to use special syntax for these cases.
(2) Evaluation control: For instance, given any product that can be computed, one should be able to compute any sub-entity of that product separately, because one might wish to combine it with multiple different sub-entities to form different products. Thus Toivo's concern about, e.g., a*u' being forbidden is not just a compilation concern, but a programming concern; an even more common variant is pre-computing x'Q to compute quadratic forms x'Q*y1, x'Q*y2, ... (where these must be done sequentially).
(3) Simplifying code: Several times when dealing with arithmetic operations mapped over multi-dimensional data sets, I have found that 6-7 lines of inscrutable looping or function-mapping code can be replaced with one or two brief array operations, in systems that provide appropriate generality. Much more readable, and much faster.
Here are my general experiences with the above systems:
MATLAB: Core language is limited beyond commonplace matrix-vector ops, so usually end up writing loops with indexing.
NumPy: More general capability than MATLAB, but messy and complicated. For almost every nontrivial problem instance, I had to refer to documentation, and even then sometimes found that I had to implement myself some array operation that I felt intuitively should have been defined. It seems like there are so many separate ideas in the system that any given user and developer will have trouble automatically guessing how the other will think about something. It is usually possible to find a short, efficient way to do it, but that way is not always obvious to the writer or reader. In particular, I feel that the need for widening and singleton dimensions just reflects a lack of generality in the implementation for applying operators (though maybe some find it more intuitive).
Mathematica: Clean and very general---in particular, all relevant operators are designed with higher-order tensor behavior in mind. Besides Dot, see for example the docs on Transpose, Flatten / Partition, and Inner / Outer. By combining just these operations, you can already cover most array-juggling use cases, and in version 9 they even have additional tensor algebra operations added to the core language. The downside is that even though the Mathematica way of doing something is clean and makes sense (if you know the language), it may not obviously correspond to the usual mathematical notation for doing it. And of course, the generality makes it difficult to know how the code will perform.
scmutils: For functional analysis, it is clean, general, and provides the most mathematically intuitive operations (both write and read) of any of the above. The up/down tuple idea is really just a more consistent and more general extension of what people often do in written mathematics using transpose signs, differentiation conventions, and other semi-standardized notions; but everything just works. (To write my Ph.D. thesis, I ended up developing a consistent and unambiguous notation resembling traditional math notation but isomorphic to Sussman & Wisdom's SICM syntax.) They've also used it for a differential geometry implementation [1], which has inspired a port to SymPy [2]. I have not used it for data analysis, but I would expect that in a generic array context where you only wanted one kind of tuple (like Mathematica's List), you could just pick one ("up") by convention. Again, generality obscures performance considerations to the programmer, but I would hope this is an area where Julia can excel.
SUMMARY
I think the proposed transposed-vector type should be characterized as the more general "down" tuple in scmutils, while regular vectors would be the "up" tuples. Calling them something like "vector" and "transposed-vector" would probably make more sense to people than calling them "up" and "down" (at the cost of brevity). This would support three categories of use:
(1) for data analysis, if people just want nested arrays, they only need "vector"; (2) for basic matrix-vector linear algebra, people can use "vector" and "transposed-vector" in direct correspondence with mathematical convention ("matrix" would be equivalent to a "transposed-vector" of "vector"s); (3) for higher-order tensor operations (where there's less standardization and people usually have to think anyway), the implementation should support the full generality of the two-kind tuple arithmetic system.
I believe this approach reflects the emerging consensus above for the results of various operations, with the exception that those cases that earlier posts considered errors (v' and v*A) would actually give meaningful (and often useful) results.
[1] http://dspace.mit.edu/handle/1721.1/30520 [2] http://krastanov.wordpress.com/diff-geometry-in-python/
@thomasmcoffee sounds like you are advocating for an explicit distinction between co- and contravariant vectors then.
I would think of that as a common application, but overly specific for what I'm advocating: to me, that has a geometric meaning implying a restriction to rectangular tensors of numbers (for coordinate representations). Since I imagine (without expertise in this area) that a suitable library of tensor algebra functions with standard arrays would usually suffice for this purpose, I'm sympathetic to Alan's point that this alone is not compelling enough to introduce a two-kind system in the core language.
I'm mainly thinking of other applications depending on more general nested structure, for instance, calculus on functions of multiple arguments of mixed dimensionality, which would be more difficult to develop as an "add-on" later if the core language did not support this distinction. Maybe we mean the same thing.
The problem with up and down vectors is that you need to extend the idea to general arrays. Otherwise vectors become something special and separate from arrays, instead of simply the 1-dimensional case of an array, which would lead to a whole mess of awful problems. I've thought a lot about how to do that, but haven't come up with anything acceptable. If you've got any good ideas of how to sanely generalize up and down vectors to arrays, I'd love to hear them.
Just trying to extrapolate this idea. As I understand it, in order to use an array to calculate with up and down vectors, you need to indicate for each dimension whether it is up or down. In general, this could be achieved by wrapping an array in something like
immutable UpDownTensor{T, N, UPMASK} <: AbstractArray{T, N}
A::AbstractArray{T, N}
end
where UPMASK would be a bit mask to indicate which dimensions are up. Then operations on non-wrapped arrays could be implemented by providing a default UPMASK as a function of N: vectors would default to a single up, matrices to the first up and the second down; then I'm not sure how it would be reasonably continued.
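One possible default, as a minimal sketch (the helper name default_upmask is an assumption, and the rule simply keeps alternating past two dimensions, as suggested further down):

default_upmask(N::Int) = Bool[isodd(d) for d in 1:N]   # odd dimensions up, even down

default_upmask(1)   # [true]         -> vector: one up index
default_upmask(2)   # [true, false]  -> matrix: first up, second down
default_upmask(3)   # [true, false, true]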
Some random thoughts:
Well, this is certainly one generalization of the Transposed type, and it certainly has some merit. Not sure whether it is feasible.
I think Toivo's suggestion is a reasonable realization of what I was advocating above. To me, the most sensible default is to continue alternating directions at higher orders: for instance, if someone provided the components of a power series as non-wrapped arrays, this would do the right thing.
But on further reflection, I think it could be very powerful to combine both ideas: (1) distinctions between up and down vectors and (2) distinctions between arrays and vectors. Then you could have objects where some dimensions are up, some are down, and some are "neutral." The reason to distinguish arrays and vectors is that, semantically, arrays are for organization (collecting multiple things of the same kinds), whereas vectors are for coordinatization (representing multidimensional spaces). The power of combining both distinctions in one object is that it can then serve both these purposes simultaneously. The neutral dimensions would be treated according to broadcasting rules, while the up/down dimensions would be treated according to tensor arithmetic rules.
Returning to an earlier example of mine, suppose you're computing a number of quadratic forms x'Q*y1, x'Q*y2, ... for different vectors y1, y2, .... Following SICM, denote up tuples (column vectors) by (...) and down tuples (row vectors) by [...]. If you wanted to do this all at once, and you're stuck with up/down only, the conventional way would be to combine the yi into a matrix Y = [y1, y2, ...] using a down tuple (of up tuples), and compute r = x'Q*Y, which gives you the results in a down tuple r. But what if you want to multiply each of these results by a (column) vector v? You can't just do r*v, because you'll get a contraction (dot product). You can convert r to an up tuple, and then multiply, which gives you your results in an up tuple (of up tuples). But suppose for the next step you need a down tuple? Semantically, you have a dimension going through your calculation that just represents a collection of things, which you always want broadcasted; but to achieve this in the strictly up/down world, you have to keep doing arbitrary context-dependent conversions to elicit the right behavior.
In contrast, suppose you also have neutral tuples (arrays), denoted {...}. Then you naturally write ys = {y1, y2, ...} as an array (of up tuples), so that r = x'Q*ys is an array, and r*v is also an array (of up tuples). Everything makes sense, and no arbitrary conversions are required.
Stefan suggests that distinguishing 1-D arrays from up/down vectors is disastrous, but I think this problem gets solved by the fact that most functions make sense operating on vectors or on arrays, but NOT either. (Or, on matrices or on arrays of vectors or on vectors of arrays or on arrays of arrays, but NOT either. And so on.) So with appropriate conversion rules, I haven't thought of a common case that wouldn't do the right thing automatically. Maybe someone can?
Upon looking deeper [1], I discovered that scmutils actually distinguishes what they call "vectors" from up and down tuples under the hood; but currently the conversion rules are set up so that these "vectors" are mapped to up tuples (as I had proposed earlier) whenever they enter the up/down world, with the caveat that "We reserve the right to change this implementation to distinguish Scheme vectors from up tuples." (Perhaps someone on campus can ask GJS if he had any specific ideas in mind.) The Sage system [2] largely separates handling of arrays from vectors and matrices (currently no core support for tensors), and the only problems I've experienced with this have to do with its lack of built-in conversion between them in cases that would obviously make sense.
[1] http://groups.csail.mit.edu/mac/users/gjs/6946/refman.txt --- starting at "Structured Objects" [2] http://www.sagemath.org/
I was talking to @jiahao at the lunch table and he mentioned that the Julia team was trying to figure out how to generalize linear algebra operations to higher dimensional arrays. Two years ago I spent several months thinking about this because I needed it for KroneckerBio. I wanted to share my approach.
Let's consider just the product between two arrays for the moment. Other operations have a similar generalization. The three most common types of products when dealing with arrays are the outer product, the inner product, and the elementwise product. We usually think of doing operations like this between two objects, such as inner(A,B) or A*B. When doing these operations on higher-dimensional arrays, however, they are not done between the arrays as a whole, but between particular dimensions of the arrays. Multiple outer/inner/elementwise suboperations happen in a single operation between two arrays and each dimension of each array must be assigned to exactly one suboperation (either explicitly or to a default). For the inner and elementwise products, one dimension on the left must be paired with an equally sized dimension on the right. The outer product dimensions do not have to be paired. Most of the time the user is doing either an inner product or an elementwise product between a pair of dimensions and an outer product for all others. The outer product makes a good default because it is the most common and does not have to be paired.
I usually think about the dimensions as being named rather than being ordered, much like the x, y, and z axes of a plot. But if you want users to actually be able to access the arrays by ordered indexing (like A[1,2,5] rather than A[a1=1, a3=5, a2=2]) then you have to have a consistent procedure to order the results of an operation. I propose ordering the result by listing all the dimensions of the first array followed by listing all the dimensions of the second array. Any dimensions that participated in an inner product are squeezed out, and for dimensions that participated in an elementwise product, only the dimension from the second array gets squeezed out.
I am going to make up some notation for this. Feel free to Juliafy it. Let A be an array that is a1 by a2 by a3 and let B be an array that is b1 by b2. Let's say that array_product(A, B, inner=[2, 1], elementwise=[3, 2]) would take the inner product between dimensions a2 and b1, the elementwise product between a3 and b2, and the outer product of a1. The result would be an array that is a1 by a3.
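As a concreteness check, here is a minimal sketch of that particular case only (the general keyword-driven array_product is not implemented; the semantics follow the description above):

# A is a1 x a2 x a3, B is b1 x b2.
# Inner product pairs (a2, b1), elementwise pairs (a3, b2), outer product keeps a1.
# Result is a1 x a3, with C[i,k] = sum over j of A[i,j,k] * B[j,k].
function array_product_example(A::AbstractArray{Float64,3}, B::AbstractMatrix{Float64})
    a1, a2, a3 = size(A)
    b1, b2 = size(B)
    @assert a2 == b1 && a3 == b2
    C = zeros(a1, a3)
    for k in 1:a3, j in 1:a2, i in 1:a1
        C[i, k] += A[i, j, k] * B[j, k]
    end
    return C
end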
It should be clear that no binary or unary operator is going to have much meaning in the context of higher-dimension arrays. You need more than two arguments in order to specify what to do with each dimension. However, you can recapture the ease of linear algebra by making the Matlab operators shorthand for array operations on just the first two dimensions:
Matlab's A*B is array_product(A, B, inner=[2,1]).
Matlab's A.' is permute(A, B, [2,1]), where permute keeps unchanged all dimensions higher than the count of the third argument.
You can choose whether or not to throw errors when the dimensionality of the arrays is greater than 2 or even not equal to 2, like Mathematica does with vector transposes. If you are using just the general array calculations, you do not have to decide whether or not to take @wenxgwen's suggestion to interpret all (n, m) arrays as (n, m, 1) and (n, m, 1, 1). Only when using the linear algebra operators or other operators that expect arrays of particular dimensionality do you have to make this decision. I like @wenxgwen's suggestion, because there is little downside in a dynamically typed language.
I wrote a more detailed description of my system, which includes addition, exponentiation, and differentiation along with the chain rule and product rule for array calculus.
Thanks for the perspective! I found this quite enlightening to understand what kind of beast that a general array * array product really is.
May be interesting to cross-reference the proposals for multidimensional array multiplication with the semantics proposed for a matrix multiplication operator in PEP 0465. In particular:
1d vector inputs are promoted to 2d by prepending or appending a '1' to the shape, the operation is performed, and then the added dimension is removed from the output. The 1 is always added on the "outside" of the shape: prepended for left arguments, and appended for right arguments. The result is that matrix @ vector and vector @ matrix are both legal (assuming compatible shapes), and both return 1d vectors; vector @ vector returns a scalar... An infelicity of this definition for 1d vectors is that it makes @ non-associative in some cases ((Mat1 @ vec) @ Mat2 != Mat1 @ (vec @ Mat2)). But this seems to be a case where practicality beats purity
Duck typing in Python causes a special problem. Naturally, arrays and matrices should be interchangeable (same underlying data). But because Python discourages the checking of a type, matrices are not cast to the correct interface at the beginning of a function that expects an array and vice versa. This is why they must have different operator characters. Julia with runtime type checking and convert methods does not suffer from this ambiguity.
From PEP 0465:
An infelicity of this definition for 1d vectors is that it makes @ non-associative in some cases ((Mat1 @ vec) @ Mat2 != Mat1 @ (vec @ Mat2))
Notably, this kind of definition can produce incorrect results in Mathematica, because Dot (.) is assumed to be associative (Flat) when evaluated symbolically (as with f below, but not with g):
In[1]:= f=X.(y.Z);
g:=X.(y.Z)
In[3]:= Block[{
X=Array[a,{2,2}],
y=Array[b,2],
Z=Array[c,{2,2}]
},{f,g}]
Out[3]= {{(a[1,1] b[1]+a[1,2] b[2]) c[1,1]+(a[2,1] b[1]+a[2,2] b[2]) c[2,1],(a[1,1] b[1]+a[1,2] b[2]) c[1,2]+(a[2,1] b[1]+a[2,2] b[2]) c[2,2]},{a[1,1] (b[1] c[1,1]+b[2] c[2,1])+a[1,2] (b[1] c[1,2]+b[2] c[2,2]),a[2,1] (b[1] c[1,1]+b[2] c[2,1])+a[2,2] (b[1] c[1,2]+b[2] c[2,2])}}
In[4]:= SameQ@@Expand[%]
Out[4]= False
From @drhagen:
Julia with runtime type checking and convert methods does not suffer from this ambiguity.
This is why I feel the right solution for Julia should let the data itself distinguish between array-like semantics (for universal broadcasting) and tensor-like semantics (for possible contraction).
I am by no means an authority here, but I don't think that the general arbitrary-dimensional collection type (Array) should support an operator that does dot product. This operator simply cannot be sensibly defined for this type because the dot product can be between any two dimensions, requiring additional arguments that cannot be supplied to a binary operator. The same goes for all the linear algebra operations, inv, transpose, etc.
To operate in the mathematical field of linear algebra, there should then be three more types, Matrix, ColumnVector, and RowVector, on which all the normal linear algebra operators and functions work as normal.
Now that the type structure is defined well, you can make it easier on the user by adding implicit conversion for Matrix to Array{2}, ColumnVector to Array{1}, and RowVector to Array{2} (not sure about this one), Array{2} to Matrix, and Array{1} to ColumnVector.
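A notional sketch of those three types and a few of the methods described (all names here are assumptions; Mat is used instead of Matrix to avoid clashing with the existing Base name, and real element types are assumed so conjugation can be ignored):

import Base: *

immutable ColumnVector{T}
    data::Vector{T}
end
immutable RowVector{T}
    data::Vector{T}
end
immutable Mat{T}
    data::Array{T,2}
end

*(r::RowVector, c::ColumnVector) = dot(r.data, c.data)              # inner product -> scalar
*(c::ColumnVector, r::RowVector) = Mat(c.data * r.data')            # outer product -> matrix
*(m::Mat, c::ColumnVector)       = ColumnVector(m.data * c.data)    # matrix * column -> column
*(r::RowVector, m::Mat)          = RowVector(vec(r.data' * m.data)) # row * matrix -> row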
My proposal above (https://github.com/JuliaLang/LinearAlgebra.jl/issues/42) allows each dimension of a multidimensional structure to distinguish whether it has neutral ("collection"/"array"), up ("column"), or down ("row") semantics. I think what you're describing is then a special case.
The advantage of this general approach is that even in computations with many dimensions of data or space, you can get operators to just do the right thing without explicitly specifying which dimensions it should operate on. I think we agree that, at least in Julia, it is much more intuitive and readable for a user to specify once the meaning of the input data by choosing type parameters, than to have to specify the meaning of every operation by calling every instance with additional arguments giving dimensional indices. Implicit or explicit conversions can still be used, with full-dimensional generality, in the cases where the meaning must be changed midstream in unusual ways.
@thomasmcoffee I like your proposal very much. I implemented something vaguely similar in a DSL (long ago, far away) with a few guiding principles (aka personal opinions):
The biggest complaints I got back then (and they were loud) were about exactly the kind of inconvenience which your trivalent semantics (adding a neutral collection notion) solve. Nice! That idea never occurred to me, but makes so much sense now that you put it out there. I'd really like to use such a system, and I mean for real work. It would be nice if Julia could accommodate this!
What you guys seem to be describing is regular tensors. I doubt that is a common enough use case alone to justify being in the standard library, as the other two use cases (collections and linear algebra) are far more common. However, if it could be integrated seamlessly, I would support it. Could you give some examples of what some common operations would look under this system, such as vector-matrix multiplication, scalar-array multiplication, distributing the addition of one array over an array of arrays, etc.?
I think you're right David. We're really talking about two use cases.
The subset of Linear Algebra is most commonly needed by most people as you say. Even there I advocate maintaining a distinction between v and v'.
What I really would like (insert disclosure of greed here) is Tensors with first-class (or close) status...near native speed (in the limiting case, compared to Linear Algebra performance) with easy syntax, no overwrought typing issues, with co/contravariance encoded in the data, not foisted onto the operators. Once I define the data semantics, the operations should just work. Tensoral Duck Typing.
Perhaps tensors and TDT belong in a package and not in core, just on grounds of relative popularity. But like the Julia declaration of independence says, Julia is born of greed. And like Gordon Gecko says, greed is good. :)
I think seamless integration is definitely achievable given a rich enough type family. Extending Toivo's https://github.com/JuliaLang/LinearAlgebra.jl/issues/42 above, it might notionally start off like this:
immutable AbstractTensorArray{T, N, UPMASK, DOWNMASK} <: AbstractArray{T, N}
A::AbstractArray{T, N}
end
# where !any(UPMASK & DOWNMASK)
typealias AbstractColumnVector{T} AbstractTensorArray{T, 1, [true], [false]}
typealias AbstractRowVector{T} AbstractTensorArray{T, 1, [false], [true]}
typealias AbstractMatrix{T} AbstractTensorArray{T, 2, [false, true], [true, false]}
(currently AbstractMatrix{T} simply aliases AbstractArray{T, 2}; potentially, another name could be used here)
From here the following implementations seem logical:
- The transpose method, after rearranging dimensions and the corresponding UPMASK and DOWNMASK indices, then interchanges the UPMASK and DOWNMASK. Neutral dimensions would be unaffected.
- AbstractArray{T, N} subtypes are typically converted by default to corresponding alternating AbstractTensorArray{T, N, [..., false, true, false, true], [..., true, false, true, false]} subtypes in tensor operations. This preserves the existing semantics of Julia's special array syntax for vectors and matrices.
- A constructor function (array) for AbstractTensorArray is used to produce neutral dimensions, and can combine other AbstractTensorArrays (or types convertible thereto) to create a combined AbstractTensorArray with neutral top-level dimension.

Considering @drhagen's examples:
vector-matrix multiplication, scalar-array multiplication
c = 1 # Int
v = [1, 2] # Array{Int, 1}
M = [[1, 2] [3, 4]] # Array{Int, 2}
# scalar-array
c * M # UNCHANGED: *(Int, Array{Int, 2}) => Array{Int, 2}
# matrix-vector
M * v # *(Array{Int, 2}, Array{Int, 1}) => *(Matrix{Int}, ColumnVector{Int}) => ColumnVector{Int}
# vector-matrix
v' * M # transpose(Array{Int, 1}) => transpose(ColumnVector{Int}) => RowVector{Int}
# *(RowVector{Int}, Array{Int, 2}) => *(RowVector{Int}, Matrix{Int}) => RowVector{Int}
# (1-array)-(2-array)
v .* M # UNCHANGED: .*(Array{Int, 1}, Array{Int, 2}) => Array{Int, 2}
(using Matrix with a definition corresponding to the AbstractMatrix definition above)
distributing the addition of one array over an array of arrays
I take this to mean, semantically, the addition of one vector over an array of vectors, addition of one matrix over an array of matrices, and so on:
# vector-(vector-array)
ws = array([1, 2], [3, 4])
# TensorArray{Int, 2, [false, true], [false, false]}
v + ws # +(Array{Int, 1}, TensorArray{Int, 2, [false, true], [false, false]}) => +(ColumnVector{Int}, TensorArray{Int, 2, [false, true], [false, false]}) => TensorArray{Int, 2, [false, true], [false, false]}
# => array([2, 4], [4, 6])
# array-(vector-array)
u = array(1, 2) # TensorArray{Int, 1, [false], [false]}
u + ws # +(TensorArray{Int, 1, [false], [false]}, TensorArray{Int, 2, [false, true], [false, false]}) => TensorArray{Int, 2, [false, true], [false, false]}
# => array([2, 3], [5, 6])
# alternatively:
v .+ ws # .+(Array{Int, 1}, TensorArray{Int, 2, [false, true], [false, false]}) => TensorArray{Int, 2, [false, true], [false, false]}
# => array([2, 3], [5, 6])
# same effect, but meaning less clear:
v .+ M # UNCHANGED: .+(Array{Int, 1}, Array{Int, 2}) => Array{Int, 2}
# => [[2, 4] [4, 6]]
# matrix-(matrix-array)
Ns = array([[1, 2] [3, 4]], [[5, 6] [7, 8]])
# TensorArray{Int, 2, [false, false, true], [false, true, false]}
M + Ns # +(Array{Int, 2}, TensorArray{Int, 2, [false, false, true], [false, true, false]}) => +(Matrix{Int}, TensorArray{Int, 2, [false, false, true], [false, true, false]}) => TensorArray{Int, 2, [false, false, true], [false, true, false]}
# => array([[2, 4] [6, 8]], [[6, 8] [10, 12]])
Considering my earlier example of scaling a vector v by several different quadratic forms x'M*w1, x'M*w2, ..., for a final result x'M*w1*v, x'M*w2*v, ...:
x = v
x' * M * ws * v # *(RowVector{Int}, Array{Int, 2}) => *(RowVector{Int}, Matrix{Int}) => RowVector{Int}
# *(RowVector{Int}, TensorArray{Int, 2, [false, true], [false, false]}) => TensorArray{Int, 1, [false], [false]}
# *(TensorArray{Int, 1, [false], [false]}, Array{Int, 1}) => *(TensorArray{Int, 1, [false], [false]}, ColumnVector{Int}) => TensorArray{Int, 1, [false, true], [false, false]}
# => array([27, 54], [59, 118])
In this notional implementation, I've assumed that AbstractArray is left alone, and thus AbstractTensorArray forms its own "space" in the type hierarchy. Things might be simplified if the whole AbstractArray family were simply replaced by AbstractTensorArray, but that's another discussion.
In the context of a package for something in quantum physics, I have been playing around with defining my own tensor type (actually more than one). Initially, I also had some notion of indices coming in two flavours (up and down, incoming and outgoing, covariant and contravariant, however you want to call it), with this information being stored in either a field in the type or even in a type parameter. After a while I decided this is just too big of a hassle. It is much easier to just associate the indices of the tensor to a vector space (which I was doing anyway already) and to allow this vector space to have a dual which is different. In practice, by vector space I just mean a simple Julia type which wraps the dimension of the space and whether it is the dual or not. If a tensor index is associated to a normal vector space, it is an up index; if it is associated to a dual space, it is a down index. If you want to work with tensors/arrays for which there is no distinction, you just define a different vector space type which doesn't distinguish between the normal vector space and its dual.
In this proposal, you can only contract tensor indices which are associated to vector spaces which are each others dual. ctranspose (=Hermitian conjugation) maps the vector space of every index onto its dual (together with the permutation of the indices in the case of a matrix, and a preferred definition for higher order tensors) etc.
Of course, normal transposition and complex conjugation are not really well defined in this setting (i.e. these are not basis independent concepts)
Minimalistically, it looks something like this:
immutable Space
dim::Int
dual::Bool
end
Space(dim::Int)=Space(dim,false) # assume normal vector space by default
dual(s::Space)=Space(s.dim,!s.dual)
matrix=Tensor((Space(3),dual(Space(5))))
# size is no longer sufficient to characterise the tensor and needs to be replaced by space
space(matrix) # returns (Space(3),dual(Space(5)))
space(matrix') # returns (Space(5),dual(Space(3)))
Of course, you could invent some syntax so as not to have to write Space constantly. You can create a different space type for which dual(s)==s in order to have tensors which do not distinguish between up and down indices, and so on. But of course there is no way this can still be built into Julia's standard Array type without breaking everything...
I use Matlab a lot for my daily scientific computing, but am a newbie at Julia. Here, I notice that there are a lot of discussions on high-dimensional Array multiplication and transposition or other similar array operations. I suggest taking a look at http://www.mathworks.com/matlabcentral/fileexchange/8773-multiple-matrix-multiplications--with-array-expansion-enabled
It basically follows the syntax similar to what @drhagen has mentioned in an earlier post, such as array_product(A, B, inner_A_dim=[1, 2], inner_B_dim=[3, 4]) for a product between arrays A and B on the given inner dimensions.
This is one Matlab package that can apply multiplication or transposition operations on selected dimensions. There is a manual in the package on how to implement these operations in Matlab, but I think the math theory should apply to other languages as well. Their idea is to implement array operations by avoiding For-loops, relying mostly on array reshaping and so on. So, it is extremely fast in Matlab. I don't know whether Julia favors vectorized or devectorized operations (it seems to be the latter). I feel that vectorized operation is an advantage for high-dimensional array operations, if the core supports it. Maybe we should sincerely consider this kind of array operation at this stage.
For your reference: Another similar implementation in Matlab for INV operation is here: http://www.mathworks.com/matlabcentral/fileexchange/31222-inversion-every-2d-slice-for-arbitrary-multi-dimension-array
Also notice that, after the release of the Matlab array operation support package in 2005, its download count has remained high up to today. In my practical experience, array operations are very frequently used in physics and other areas. I would say, if Julia had similar functions for operating on arrays of arbitrary size, the game would become very interesting!
another vote here for @alanedelman's proposed solution towards the top. here's a motivating example.
right now, a row-slice is a 2d array, while a column slice is a 1d array; which is weirdly asymmetric and ugly:
julia> A = randn(4,4)
4x4 Array{Float64,2}:
2.12422 0.317163 1.32883 0.967186
-1.0433 1.44236 -0.822905 -0.130768
-0.382788 -1.16978 -0.19184 -1.15773
-1.2865 1.21368 -0.747717 -0.66303
julia> x = A[:,1]
4-element Array{Float64,1}:
2.12422
-1.0433
-0.382788
-1.2865
julia> y = A[1,:]
1x4 Array{Float64,2}:
2.12422 0.317163 1.32883 0.967186
in particular, it means I can't multiply a row by a column and extract a number without doing some terribly ugly manipulation like
julia> dot(y[:],x)
2.4284575954571106
julia> (y*x)[1]
2.42845759545711
It's not a coherent proposal unless you make '* a special operator, which is pretty dubious, since then x'*y and (x')*y do not mean the same thing. Moreover, it would make multiplication non-associative.
I understand the difficulties with x*y' and y'*x. It may be better to treat inner and outer products as separate operations, using e.g. dot(). (Perhaps also using \cdot?)
But what are the arguments in favor of having a slice along the first dimension return an object whose dimension is different than a slice along the second dimension? For consistency, it seems that every time you slice, the dimension of the resulting object should be diminished by one.
from @alanedelman:
We really should think carefully about how the transpose of a vector should dispatch the various A_op_B methods. It must be possible to avoid new types and ugly mathematics. For example, vector'vector yielding a vector (#2472, JuliaLang/LinearAlgebra.jl#8), vector' yielding a matrix, and vector'' yielding a matrix (#2686) are all bad mathematics.
What works for me mathematically (which avoids introducing a new type) is that for a 1-dimensional Vector v:
- v' is a no-op (i.e. just returns v),
- v'v or v'*v is a scalar,
- v*v' is a matrix, and
- v'A or v'*A (where A is an AbstractMatrix) is a vector

A general N-dimensional transpose reverses the order of indices. A vector, having one index, should be invariant under transposition.
In practice v' is rarely used in isolation, and is usually encountered in matrix-vector products and matrix-matrix products. A common example would be to construct bilinear forms v'A*w and quadratic forms v'A*v, which are used in conjugate gradients, Rayleigh quotients, etc.
The only reason to introduce a new Transpose{Vector} type would be to represent the difference between contravariant and covariant vectors, and I don't find this compelling enough.