Closed by jasax 9 years ago
Not positive, but I think you actually want to be using F[:Rs]
as the diagonal elements of a scaling matrix, not as a vector. Some of the other issues are due to not currently having true sparse vectors, the same way we have 1-d dense vectors. Sparse vectors are inconsistently implemented as n-by-1 CSC matrices, which leads to some problems.
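To illustrate the scaling-matrix idea: a sketch (using the 0.3-era API shown in this thread; `Rs` and `A` here are stand-in values, not the factorization's actual fields) that lifts the vector onto the diagonal of a sparse matrix instead of broadcasting:

```julia
# Hypothetical example: scale the rows of a sparse matrix A by a vector Rs
# by building an explicit sparse diagonal matrix instead of broadcasting.
A  = sparse([1.0 2.0; 3.0 4.0])
Rs = [10.0, 100.0]

S = spdiagm(Rs)      # 2x2 sparse matrix with Rs on the diagonal
B = S * A            # row i of A gets multiplied by Rs[i]

full(B)              # [10.0 20.0; 300.0 400.0]
```

This sidesteps the broadcasting question entirely, since sparse-times-sparse multiplication is well defined.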
This is a bit awkward, as is the existence of a 1-D Array type that becomes a different type after double transposition (which should return the original object). The code below shows that the "black sheep" is the original vector, g: if g had been created as a 5x1 Array{Float64,2} (which is what g becomes after double transposition) instead of as a 5-element Array{Float64,1}, the inconsistency wouldn't exist. But that change (vector becomes matrix) would probably have some impact on execution time in many uses of vectors... :-)
julia> g = [1; 2; 3; 4; 5.0]
5-element Array{Float64,1}:
 1.0
 2.0
 3.0
 4.0
 5.0

julia> g == g''
false

julia> sparse(g) == sparse(g'')
true

julia> g'
1x5 Array{Float64,2}:
 1.0  2.0  3.0  4.0  5.0

julia> g''
5x1 Array{Float64,2}:
 1.0
 2.0
 3.0
 4.0
 5.0

julia> sparse(g)
5x1 sparse matrix with 5 Float64 entries:
	[1, 1]  =  1.0
	[2, 1]  =  2.0
	[3, 1]  =  3.0
	[4, 1]  =  4.0
	[5, 1]  =  5.0

julia> sparse(g'')
5x1 sparse matrix with 5 Float64 entries:
	[1, 1]  =  1.0
	[2, 1]  =  2.0
	[3, 1]  =  3.0
	[4, 1]  =  4.0
	[5, 1]  =  5.0

julia> (sparse(g))' == sparse(g')
true

julia> full(sparse(g)') == g'
true
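As a sketch of a workaround (not a fix for the underlying inconsistency): `vec` reshapes any array back to 1-D, so it recovers the original vector from the 5x1 matrix that `g''` produces:

```julia
g = [1; 2; 3; 4; 5.0]   # 5-element Array{Float64,1}

h = g''                  # 5x1 Array{Float64,2} after double transposition
v = vec(h)               # back to a 5-element Array{Float64,1}

g == v                   # true: vec undoes the dimension change
```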
I apologize if these questions have already been discussed and settled previously (as they probably have...) but I couldn't find anything when searching.
Thanks @jiahao. The funny thing is that the sparse() and full() functions eliminate the inconsistency; that is, sparse objects are "mathematically" consistent under transposition :-)
More funny stuff :-) (we can always use multiple dispatch to tackle both types...)
julia> function f(v::Array{Float64,1})
           println(v)
       end
f (generic function with 1 method)

julia> g
5-element Array{Float64,1}:
 1.0
 2.0
 3.0
 4.0
 5.0

julia> f(g)
[1.0,2.0,3.0,4.0,5.0]

julia> f(g'')
ERROR: `f` has no method matching f(::Array{Float64,2})

julia> f(full(sparse(g)))
ERROR: `f` has no method matching f(::Array{Float64,2})

julia> function f(v::Array{Float64,2})
           println(v)
       end
f (generic function with 2 methods)

julia> f(g'')
[1.0 2.0 3.0 4.0 5.0]
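Rather than defining two separate methods, a single method can accept both shapes via a Union (a sketch in 0.3-era `Union(...)` syntax; `f` is the throwaway function from the session above):

```julia
# One method covering both 1-D vectors and 2-D matrices of Float64.
function f(v::Union(Array{Float64,1}, Array{Float64,2}))
    println(v)
end

g = [1; 2; 3; 4; 5.0]
f(g)    # accepted: Array{Float64,1}
f(g'')  # accepted: Array{Float64,2}
```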
But I'm digressing... the original issue was that the v .* Msparse operation doesn't work while v .* M does, and I don't know whether that is seen as a "bug" or a "feature" in Julia :-).
sparse objects are "mathematically" consistent with transposition :-)
Only because sparse vectors are currently implemented in a more Matlab-ish manner than the way Julia's dense 1-D vectors work.
v .* Msparse operation doesn't work, while v .* M does... and I don't know if it is seen as a "bug" or as a "feature" in Julia :-)
Bug. Sparse vectors should probably be the 1-D case of general N-D COO sparse, rather than shoehorned in as 1-column CSC matrices like they are now.
The inconsistency of dense g'', as Jiahao pointed out, is a separate bug in Julia's handling of transposed vectors. There's another PR open for a Transpose type, which I think would be a nicer solution here. And if we had true 1-D sparse vectors, then the same behaviors would apply to both sparse and dense 1-D vectors w.r.t. transposition and linear algebraic operations.
Oh, and the v .* Msparse failure actually has another cause unrelated to sparse vectors: the broadcasting behavior of v .* Mdense just isn't implemented in the sparse version of .*.
julia> [5,6] .* [1 2; 3 4]
2x2 Array{Int64,2}:
5 10
18 24
There are cases where this is what you want, if :Rs stands for row scaling. But not quite: the sparse methods are taking priority, and the broadcast machinery isn't generalized to sparse.
julia> @which [5,6] .* [1 2; 3 4]
.*(As::AbstractArray{T,N}...) at broadcast.jl:278
julia> @which [5,6] .* sparse([1 2; 3 4])
.*(A::Array{T,N},B::SparseMatrixCSC{Tv,Ti<:Integer}) at sparse/sparsematrix.jl:634
Line 634 of sparse/sparsematrix.jl is (.*)(A::Array, B::SparseMatrixCSC) = (.*)(sparse(A), B), which doesn't do broadcasting correctly because sparse vectors are not 1-D.
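One direction a fix could take (a sketch only, under the assumption that the vector-times-matrix case is the one to special-case; `sparsebroadcastmul` is a hypothetical name, not the actual patch) is to intercept a 1-D dense operand before it gets converted to a one-column sparse matrix:

```julia
# Hypothetical replacement for the line-634 behavior: route a 1-D dense
# operand through an explicit sparse diagonal so the result matches the
# dense broadcasting semantics (column vector scales rows).
function sparsebroadcastmul(A::Array, B::SparseMatrixCSC)
    if ndims(A) == 1 && length(A) == size(B, 1)
        return spdiagm(A) * B          # row scaling, like v .* Mdense
    else
        return sparse(A) .* B          # fall back to the existing behavior
    end
end

full(sparsebroadcastmul([5, 6], sparse([1 2; 3 4])))  # [5 10; 18 24]
```

This reproduces the dense `[5,6] .* [1 2; 3 4]` result from the example above.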
OK, thanks @tkelman. Unfortunately I know very little (i.e. near zero...) about Julia's innards. I browsed umfpack.jl in the sources and saw that C functions from the package are being called (with ccall()). I can't yet juggle Julia's types and type system, so I couldn't go to a specific spot and propose a patch :-(
I'm in the (spare time) process of building simulation tools for handling (hopefully) very large electric circuits, which is only viable in a sparse-matrix landscape. Once upon a time I did those things in C, but now I was hoping to benefit from the sparse implementation in Julia to avoid the malloc()/free() tango in C :-), use PCRE to parse input files, and escape the turtle velocity offered by Perl, Python, Ruby, etc... :-)
@jasax I think the best place to look is around line 634 of sparse/sparsematrix.jl. That file should be mostly pure-Julia code. I think unconditionally converting A::Array to sparse(A) might be the wrong thing to do for .* here. You may be able to make headway experimenting with some alternatives.
I'm fiddling with sparse matrices and have found a few quirks :-)
What I intend to do is use LU factorization on sparse matrices (for now on real matrices, but I hope to get it working also on sparse complex matrices).
First, in the manual's Linear Algebra section (http://docs.julialang.org/en/release-0.3/stdlib/linalg/), the relation between the L and U factors (for F = lufact(A) they are extracted as F[:L] and F[:U]) and a row and column permutation of the factorized sparse matrix A reads as
F[:L] * F[:U] == Rs .* A[F[:p], F[:q]]
It seems to me (according to the explanation in that section) that it should read as F[:L] * F[:U] == F[:Rs] .* A[F[:p], F[:q]]
because Rs doesn't exist per se (it has to be extracted from the object F, and so it is F[:Rs]). Aside from that typo, the RHS of that expression gives an error when evaluated, due to a mismatch in dimensions. Indeed the .* operator does not behave identically (unlike the .+ operator, for example) on dense and on sparse matrices, as the session below clearly shows. I end that session with an example of the failure, by applying lufact() and exemplifying the error.
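Assuming the corrected form with F[:Rs], the identity can be checked numerically by applying the row scaling through an explicit sparse diagonal, since v .* Msparse errors out (a sketch against the 0.3 API; the matrix A here is an arbitrary example, and whether F[:Rs] scales A before or after the row permutation is exactly the kind of detail such a check can settle):

```julia
# Hypothetical check of F[:L] * F[:U] == F[:Rs] .* A[F[:p], F[:q]],
# using spdiagm to apply the row scaling.
A = sparse([4.0 1.0 0.0; 1.0 3.0 1.0; 0.0 1.0 2.0])
F = lufact(A)

lhs  = F[:L] * F[:U]
rhs1 = spdiagm(F[:Rs]) * A[F[:p], F[:q]]     # scale after permuting
rhs2 = (spdiagm(F[:Rs]) * A)[F[:p], F[:q]]   # scale before permuting

println(norm(full(lhs - rhs1)))   # one of these should be ~0
println(norm(full(lhs - rhs2)))   # up to floating-point roundoff
```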
I checked the code around sparse/sparsematrix.jl:531, where the error is triggered, and there is a check for conformant dimensions for the element-wise product .* (and for other operators) (where A and B are the operands), which means that the sparse .* operator only accepts operands with the same dimensions. I think that is what triggers the error.
But then this amounts to different behavior for .* between the sparse and non-sparse matrix representations.
Below is a session with Julia 0.3.0 running on Windows 7 on an 8-core AMD Piledriver processor, where the above cases are visible, with added comments. I experimented with Julia 0.2.1 on Linux (Ubuntu under VirtualBox) and the behavior of .* is the same as on Windows 7; however, in 0.2.1 lufact() is not implemented for sparse matrices, so I could not check it.