JuliaSparse / SparseArrays.jl

SparseArrays.jl is a Julia stdlib
https://sparsearrays.juliasparse.org/

sum(sparse) -> dense? #43

Open yurivish opened 5 years ago

yurivish commented 5 years ago
julia> A = sparse([rand() < 0.01 ? 1 : 0 for _ in 1:50, _ in 1:50])
50×50 SparseMatrixCSC{Int64,Int64} with 23 stored entries:
[...]

julia> sum(A, dims=1)
1×50 Array{Int64,2}:
 0  1  0  0  0  0  0  0  1  0  1  0  0  0  0  1  0  1  0  0  1  1  0  0  0  2  3  0  0  0  0  0  0  0  0  0  0  0  1  1  1  0  0  1  1  0  2  1  0  3

Sometimes you have a very sparse matrix and want to sum a slice of it. The slice may potentially have very many columns or rows that are entirely zero, in which case it makes a lot of sense to preserve sparsity in the output.

In the non-slice case, where every row and column usually contains at least one nonzero value, it makes sense to keep the result dense.

It would be great if preserving sparsity in sum and other similar reductions could be opted into via a keyword argument, e.g. sparse=true.

cc @simonbyrne, who suggested the keyword idea on Slack a few months ago.
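A minimal sketch of what such an opt-in might look like, assuming a hypothetical `sparsesum` helper (neither the `sparse=true` keyword nor `sparsesum` exists in SparseArrays; this just re-sparsifies the usual reduction):

```julia
using SparseArrays

# Hypothetical helper approximating the proposed sparse-preserving reduction:
# compute the usual (dense) reduction, then store only the nonzeros.
sparsesum(A::SparseMatrixCSC; dims) = sparsevec(vec(sum(A, dims=dims)))

A = sparse([1, 3], [2, 4], [5, 7], 1000, 1000)  # 1000×1000, 2 stored entries
v = sparsesum(A; dims=2)                        # SparseVector, 2 stored entries
```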

ViralBShah commented 5 years ago

I think the right thing to do is to always have a sparse output, and the user can explicitly convert it to dense if required.
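For illustration, the explicit conversion is a one-liner either way (sketch with arbitrary example data):

```julia
using SparseArrays

# If the reduction returned a sparse result, densifying it is explicit:
v = sparsevec([2, 5], [1.0, 3.0], 8)   # sparse result with 2 stored entries
d = Vector(v)                          # explicit dense conversion
```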

abraunst commented 5 years ago

Note that summing along the dims=1 dimension would probably be slower with a sparse output. The dims=2 case could maybe benefit somewhat, but I'm not sure how relevant this "really really sparse" case is. Note also that both A*ones(N) and ones(1,N)*A give dense arrays (i.e. they are alternative ways of achieving the same results).
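The matvec alternatives mentioned here can be checked directly; both produce dense outputs with the same values as the corresponding sums (sketch with an arbitrary random matrix):

```julia
using SparseArrays

A = sprand(100, 100, 0.05)     # random sparse matrix, ~5% density
rowsums = A * ones(100)        # dense Vector; same values as vec(sum(A, dims=2))
colsums = ones(1, 100) * A     # dense 1×100 Matrix; same values as sum(A, dims=1)
```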

yurivish commented 5 years ago

> The dims=2 case could maybe benefit somewhat, but I'm not sure how relevant this "really really sparse" case is.

If it helps to illustrate my concrete use case with some size numbers: I was working with a square matrix of 11 million x 11 million census blocks, and summing a slice of roughly 11 million x 100 returned a dense vector of length 11 million.

I used findnz to find the nonzero elements in my columns of interest and then did a group sum by the row index, which was faster than using the built-in sum.
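The findnz-plus-group-sum approach described here can be sketched with a plain Dict (the hypothetical `rowsum_by_dict` below is an illustration; the actual code, shown further down, used SplitApplyCombine's groupsum):

```julia
using SparseArrays

# Group-sum by row index over the stored entries only: O(nnz) work,
# independent of the (possibly huge) number of all-zero rows.
function rowsum_by_dict(A::SparseMatrixCSC)
    acc = Dict{Int,eltype(A)}()
    I, J, V = findnz(A)
    for (i, v) in zip(I, V)
        acc[i] = get(acc, i, zero(eltype(A))) + v
    end
    acc  # row index => row sum, only for rows that have nonzeros
end

A = sparse([1, 1, 4], [2, 3, 1], [10, 20, 5], 6, 6)
d = rowsum_by_dict(A)  # rows 1 and 4 carry nonzeros; all others are absent
```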

abraunst commented 5 years ago

> The dims=2 case could maybe benefit somewhat, but I'm not sure how relevant this "really really sparse" case is.
>
> If it helps to illustrate my concrete use case with some size numbers: I was working with a square matrix of 11 million x 11 million census blocks, and summing a slice of roughly 11 million x 100 returned a dense vector of length 11 million.
>
> I used findnz to find the nonzero elements in my columns of interest and then did a group sum by the row index, which was faster than using the built-in sum.

Could you clarify the example? What do you mean by "columns of interest"? How many nonzeros are in the output vector? What were the respective times (i.e. how much faster)?

yurivish commented 5 years ago

Sorry, here's a quick example where a hacky sparse sum using Dicts outperforms the naive sum. Perhaps I'm missing something, and this isn't a rigorous benchmark by any means, but I think it replicates the essence of the situation I found myself in: slicing a big square matrix and summing some of its rows or columns together.

using SparseArrays, SplitApplyCombine
function test()
    s = let
        I = Int[]; J = Int[]; V = Int[]  # concrete element types avoid Vector{Any}
        sz = 10_000_000
        for i in 1:sz
            j = rand(1:sz)
            v = rand(1:10)
            push!(I, i)
            push!(J, j)
            push!(V, v)
        end
        sparse(I, J, V)
    end;
    # my actual use had noncontiguous indexes in the second dimension
    slice = s[:, 100_000:100_150]; 
    @show typeof(slice)
    @time a = sum(slice, dims=2);
    @time b = let
        nt = ((i=i, j=j, v=v) for (i, j, v) in zip(findnz(slice)...))
        groupsum(x -> x.i, x -> x.v, nt)
    end
    a, b
end
a, b = test();

Timings on my laptop (2012 MacBook Pro) after compilation are

  0.465381 seconds (10 allocations: 152.588 MiB, 82.09% gc time)
  0.028346 seconds (29.54 k allocations: 1.358 MiB)
abraunst commented 5 years ago

Just to simplify the example, your slice looks very similar to s=sprand(10^7,150,10^-7,i->rand(1:10,i)). It is a bit extreme, with only ~ 150 nonzeros in 1.5*10^9 entries. The fastest way to sum I found in this case was:

function sum2(x::SparseMatrixCSC)
    # Accumulate row sums into a sparse vector, visiting only the nnz(x)
    # stored entries (x.rowval / x.nzval are the CSC internal arrays).
    o = spzeros(eltype(x), size(x,1))
    @inbounds for i = 1:nnz(x)
        o[x.rowval[i]] += x.nzval[i]
    end
    o
end

I think the problem with this kind of approach is that it can fail badly when the density is a bit higher. For example, if the second 10^-7 is replaced with 10^-4 or so, which still gives a really sparse matrix (most entries in the output vector are still 0), then sum(s, dims=2) is about 40 times faster. The difference becomes much larger for higher densities, of course. I think it's really hard to optimize for all possible scenarios.

Sacha0 commented 5 years ago

👍 for @abraunst's comments; optimal methods for hypersparse arrays often differ from those for sparse arrays. As an alternative to the keyword idea, we could extend sum! to accept a destination array, and dispatch on the destination type. Best!
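A hedged sketch of the destination-dispatch idea, using a hypothetical `rowsum!` in place of an actually extended Base.sum! method (the dispatch-on-destination proposal itself is not implemented here):

```julia
using SparseArrays

# Sketch: a row-sum that dispatches on a sparse destination and touches
# only stored entries, via the public accessors rowvals/nonzeros/nzrange.
function rowsum!(dest::SparseVector, A::SparseMatrixCSC)
    rows = rowvals(A)
    vals = nonzeros(A)
    for j in 1:size(A, 2)
        for k in nzrange(A, j)
            dest[rows[k]] += vals[k]   # O(nnz) accumulation into sparse dest
        end
    end
    return dest
end

A = sparse([1, 5], [1, 2], [2.0, 3.0], 10, 10)
dest = spzeros(10)
rowsum!(dest, A)   # only rows 1 and 5 end up stored in dest
```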

StefanKarpinski commented 5 years ago

In general, the assumption for a SparseMatrix{R,C}SC, S, is that the number of non-zeros is roughly O(size(S,1) + size(S,2)). Since the results of sum(S, dims=1) and sum(S, dims=2) are both of that size, the reasonable choice seems to be to return a dense vector in both cases. For CSC storage it makes no sense to return a sparse vector when summing along the columns, for example. For summing along the rows, the result might happen to be sparse if all the nonzero values fall in a small subset of rows, but that's not something that should generally be expected; there is no more reason to expect it than to expect the row sums of a dense matrix to happen to contain many zeros. In general, if size(S,1) ≈ size(S,2), we would expect both row and column sums to be dense, so producing a dense vector for these operations seems sane to me.
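A quick illustration of this expectation: with nnz on the order of the side lengths, the column sums of a random sparse matrix are almost all nonzero, so a sparse result would store nearly every entry anyway (sketch; the density and sizes below are arbitrary):

```julia
using SparseArrays

A = sprand(1000, 1000, 4 / 1000)   # ~4000 nonzeros, ~4 per column on average
colsums = sum(A, dims=1)           # 1×1000, dense
frac_nonzero = count(!iszero, colsums) / length(colsums)
# frac_nonzero is typically ≈ 0.98: only columns with no stored entry sum to 0
```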

SobhanMP commented 2 years ago

@Wimmerer do you think this would be a good feature to add?

ViralBShah commented 2 years ago

Yes, this is good to do: make the sum of a SparseMatrixCSC along either dimension return a SparseVector.

rayegun commented 2 years ago

I imagine this SparseVector will very often be effectively dense, but that's the only correct route.