aufdenkampe opened 1 year ago
@sjordan29, if this enables us to more easily leverage Dask, then it may be connected to
That said, there appears to be substantially less development activity at https://github.com/pydata/sparse than at https://github.com/scipy/scipy, especially given that the scipy v1.11 release notes state: "A new public base class scipy.sparse.sparray was introduced, allowing further extension of the sparse array API." See https://docs.scipy.org/doc/scipy/release/1.11.0-notes.html#scipy-sparse-improvements
Given that Dask implements the numpy.ndarray interface under the hood, it may be that Dask would work perfectly well with the new scipy.sparse.sparray.
This person is wondering the same thing: https://dask.discourse.group/t/confused-about-working-with-sparse-arrays/1762
Quick update: PyData/sparse appears to have been re-activated, with the release of v0.15 in Jan 2024: https://sparse.pydata.org/en/stable/changelog.html. A v0.16 has seen a series of alpha releases since April: https://github.com/pydata/sparse/tags. Lastly, the xarray community survey just asked whether they should prioritize work on sparse.
Meanwhile, scipy 1.14 also brings improvements: "scipy.sparse.linalg.spsolve_triangular is now more than an order of magnitude faster in many cases."
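The speedup applies to the existing call signature, so no code changes are needed to benefit. A small usage sketch of `spsolve_triangular` (the system here is illustrative):

```python
import numpy as np
from scipy.sparse import csr_array
from scipy.sparse.linalg import spsolve_triangular

# Lower-triangular sparse system L @ x = b
L = csr_array(np.array([[2.0, 0.0, 0.0],
                        [1.0, 3.0, 0.0],
                        [0.0, 1.0, 4.0]]))
b = np.array([2.0, 4.0, 5.0])

# Forward substitution; lower=True is the default
x = spsolve_triangular(L, b, lower=True)
```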
It's still unclear which is the best library at this moment.
It appears that the PyData/sparse library integrates more natively with Xarray and Dask than the scipy.sparse subpackage, because it was designed from the ground up to use arrays instead of matrices. See:
It's possible that this is being addressed by the recent refactoring of scipy.sparse, but Xarray and Dask are already integrated with PyData/sparse.
It might be worth exploring whether migrating to PyData/sparse would provide any benefits in performance or code simplicity.
On the other hand, development doesn't appear very active, so it's possible that momentum has shifted away from this effort. See: https://sparse.pydata.org/en/stable/changelog.html