pydata / sparse

Sparse multi-dimensional arrays for the PyData ecosystem
https://sparse.pydata.org
BSD 3-Clause "New" or "Revised" License

Fast conversion to scipy.sparse.csr/csc_matrix #9

Open mrocklin opened 7 years ago

mrocklin commented 7 years ago

In computations involving tensordot we can be bound by the routines that convert from sparse.COO to scipy.sparse.coo_matrix (this should be free) and from scipy.sparse.coo_matrix to scipy.sparse.csc_matrix. I don't believe these currently run at optimal speed.

I'm currently getting around some of this by caching the CSR matrix on the sparse.COO object, but this is a bit kludgy and doesn't help if we're doing reshapes, transposes, etc.

Instead, given the importance of this operation, it might be worth defining a single-step conversion from a 2-D sparse.COO array without duplicates to a scipy.sparse.csr/csc matrix.
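For reference, here's a minimal sketch of the direction that should be free: a 2-D sparse.COO already stores its coordinates and values in exactly the arrays the scipy.sparse.coo_matrix constructor wants (sparse.random below is only there to produce example data):

import scipy.sparse
import sparse

x = sparse.random((1000, 1000), density=0.01)   # example 2-D sparse.COO

# coords is a (2, nnz) integer array and data is a (nnz,) array, so they map
# directly onto the coo_matrix constructor with no reordering required.
coo = scipy.sparse.coo_matrix((x.data, (x.coords[0], x.coords[1])),
                              shape=x.shape)
csr = coo.tocsr()   # the conversion this issue wants to speed up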

jakevdp commented 7 years ago

Hmm... what that effectively involves is an efficient sorted group-by on the row indices. Are you certain that scipy does this suboptimally, or is it just an inherently slow operation?

jakevdp commented 7 years ago

Actually, I take that back. Assuming no duplicates, you could do it as an O[N log N] lexsort on the row and column indices, followed by an O[N] pass to construct the indptr array. It would be like 4-5 lines of numpy :smile: Would be worth benchmarking.
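Something along these lines, on plain coordinate arrays (the toy data is just for illustration; np.searchsorted over the sorted rows is one way to do the final pass):

import numpy as np
from scipy.sparse import csr_matrix

# Toy COO data with no duplicate coordinates, target shape (n_rows, n_cols)
row = np.array([2, 0, 1, 0])
col = np.array([1, 2, 0, 0])
data = np.array([4.0, 1.0, 3.0, 2.0])
n_rows, n_cols = 3, 3

order = np.lexsort((col, row))           # O[N log N]: sort by row, then column
row_s, col_s, data_s = row[order], col[order], data[order]
indptr = np.searchsorted(row_s, np.arange(n_rows + 1))   # pass to build row pointers
csr = csr_matrix((data_s, col_s, indptr), shape=(n_rows, n_cols))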

mrocklin commented 7 years ago

scipy.sparse also spends about half of its time checking consistency. If we're able to guarantee consistency externally, it would be nice to skip this.
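For reference, a rough sketch of one way to sidestep that validation: attach pre-built arrays to an empty matrix so the (data, indices, indptr) constructor's checks never run. This is outside scipy's documented API and is only safe if the arrays are already known to be consistent:

from scipy.sparse import csr_matrix

def csr_no_validation(data, indices, indptr, shape):
    # Caller guarantees: indices/indptr are integer arrays of the right
    # lengths, rows are in increasing order, and no column index is out
    # of bounds -- none of this is re-checked here.
    m = csr_matrix(shape, dtype=data.dtype)
    m.data = data
    m.indices = indices
    m.indptr = indptr
    return m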

mrocklin commented 7 years ago

What if we are already sorted? In my current application (which I think is representative of a few applications), I want to perform tocsr-then-matmul computations repeatedly on the same data throughout the computation. The data has already been normalized with sum_duplicates and so may already be in some predictable order. When I thought about this briefly before, I wondered whether it was doable in two linear passes over the data.

jakevdp commented 7 years ago

Here's some code for fast conversion to csr using numpy routines; csc would be similar:

import numpy as np
from scipy.sparse import csr_matrix

def to_csr(self):
    assert self.ndim == 2

    # Pass 1: sum duplicates
    self.sum_duplicates()

    # Pass 2: sort indices
    order = np.lexsort(self.coords[::-1])
    row, col = self.coords[:, order]
    data = self.data[order]

    # Pass 3: count nonzeros in each row and accumulate into indptr
    indptr = np.zeros(self.shape[0] + 1, dtype=row.dtype)
    np.cumsum(np.bincount(row, minlength=self.shape[0]), out=indptr[1:])

    return csr_matrix((data, col, indptr), shape=self.shape)

It basically takes three passes through the data: (1) summing the duplicates, (2) sorting the indices, and (3) counting the nonzeros within each row. If you already know that the duplicates are summed or the indices are sorted, you can skip the corresponding pass and speed things up.
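As a sketch of that, with hypothetical assume_no_duplicates / assume_sorted flags (not part of any existing API) letting the caller vouch for a precondition so the corresponding pass is skipped:

def to_csr_fast(self, assume_no_duplicates=False, assume_sorted=False):
    assert self.ndim == 2

    if not assume_no_duplicates:
        self.sum_duplicates()                       # pass 1, skipped if vouched for

    if assume_sorted:
        (row, col), data = self.coords, self.data   # pass 2 skipped: use coords as-is
    else:
        order = np.lexsort(self.coords[::-1])
        row, col = self.coords[:, order]
        data = self.data[order]

    indptr = np.zeros(self.shape[0] + 1, dtype=np.intp)   # pass 3 as above
    np.cumsum(np.bincount(row, minlength=self.shape[0]), out=indptr[1:])
    return csr_matrix((data, col, indptr), shape=self.shape)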

jakevdp commented 7 years ago

Actually, if the duplicates are pre-summed and the indices pre-sorted, then you could cache the cumulative count of nonzeros (i.e. indptr) and get CSR conversion basically for free!
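A hedged sketch of that caching (the _cached_indptr attribute and to_csr_cached name are made up here), assuming duplicates are pre-summed and coords are already in row-major sorted order:

def to_csr_cached(self):
    # Compute indptr once, keep it on the array, and reuse it so every
    # later CSR conversion is just re-wrapping the existing arrays.
    if getattr(self, "_cached_indptr", None) is None:
        row = self.coords[0]
        indptr = np.zeros(self.shape[0] + 1, dtype=np.intp)
        np.cumsum(np.bincount(row, minlength=self.shape[0]), out=indptr[1:])
        self._cached_indptr = indptr
    return csr_matrix((self.data, self.coords[1], self._cached_indptr),
                      shape=self.shape)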

mrocklin commented 7 years ago

OK, so it sounds like it would be useful to track metadata about duplicates, sorted order, and nonzero-counts (hrm, how does this generalize?). Which operations require which attributes? Which operations maintain these attributes?

For example, most operations other than add maintain no_duplicates. I think reshape maintains sorted order, although I'm not sure. Transpose kills sorted order, and so on.
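A rough sketch of what tracking that metadata might look like; the wrapper name and the per-operation rules below just restate the guesses above and would need verification operation by operation:

class TrackedCOO:
    def __init__(self, coo, no_duplicates=False, is_sorted=False):
        self.coo = coo
        self.no_duplicates = no_duplicates
        self.is_sorted = is_sorted

    def transpose(self):
        # Duplicates can't appear, but row-major sorted order is lost.
        return TrackedCOO(self.coo.T, self.no_duplicates, is_sorted=False)

    def add(self, other):
        # Addition may introduce duplicate coordinates until the result
        # is normalized again with sum_duplicates.
        return TrackedCOO(self.coo + other.coo,
                          no_duplicates=False, is_sorted=False)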