JuliaSparse / SuiteSparseGraphBLAS.jl

Sparse, General Linear Algebra for Graphs!
MIT License

Integration with CUDA.jl #44

Open · yuehhua opened this issue 3 years ago

yuehhua commented 3 years ago

Thank you for implementing a high-performance sparse representation for graphs. I am curious about the roadmap for this repo: will it provide integration with CUDA.jl? I want to use this repo for the low-level operations in GeometricFlux.jl to gain performance (FluxML/GeometricFlux.jl#213). The deep learning community is used to training models on GPUs, so I am also wondering about CUSPARSE support. It would be good to have as future work.
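For context, something like this CPU-only sketch is the kind of low-level operation I have in mind (assuming the GBMatrix/GBVector constructors and the standard * overload shown in this repo's README):

```julia
using SuiteSparseGraphBLAS, SparseArrays

n = 1_000
adj = sprand(Float32, n, n, 0.01)   # random sparse "adjacency" matrix
A = GBMatrix(adj)                   # GraphBLAS matrix (assumed constructor from SparseMatrixCSC)
x = GBVector(rand(Float32, n))      # node feature vector (assumed constructor from Vector)
y = A * x                           # sparse matrix-vector product, i.e. neighborhood aggregation
```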

rayegun commented 3 years ago

So at the moment GPU support is on the horizon but not available yet (see my comments in the other issue).

I'm asking in the GPU Slack about integration with CUDA.jl right now. Both options for CUDA support will be C/C++ libraries until we eventually get a native Julia GraphBLAS implementation, which could use CUDA.jl directly. If integrating with CUDA.jl is possible I'll definitely pursue that; if not, they'll be "separate" libraries, but I imagine the arrays could still be passed between CUDA.jl and the C libraries.
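Roughly what I mean by passing arrays between them, as a sketch (the C entry point and library name here are hypothetical; only the device-pointer-through-ccall pattern is the point):

```julia
using CUDA

x = CUDA.rand(Float32, 1024)
y = CUDA.rand(Float32, 1024)

# CUDA.jl hands out raw device pointers, which a C/C++ GraphBLAS GPU build
# could consume directly without copying through the host.
xptr = pointer(x)   # ::CuPtr{Float32}
yptr = pointer(y)

# Hypothetical C kernel wrapper, shown commented out since "libhypothetical" does not exist:
# ccall((:gpu_axpy, "libhypothetical"), Cvoid,
#       (CuPtr{Float32}, CuPtr{Float32}, Cint), xptr, yptr, length(x))
```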

CUSPARSE (at least the linear algebra part) is superseded by GraphBLAS in some sense, and I believe all of the kernels in the SuiteSparse:GraphBLAS GPU work, and maybe GraphBLAST, will be faster than CUSPARSE.
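For reference, the CUSPARSE route that already works from Julia today goes through CUDA.jl's CUSPARSE wrappers; a minimal sketch:

```julia
using CUDA, CUDA.CUSPARSE, SparseArrays

n = 10_000
A = CuSparseMatrixCSR(sprand(Float32, n, n, 0.001))  # upload the sparse matrix to the GPU as CSR
x = CUDA.rand(Float32, n)
y = A * x                                            # SpMV dispatched to CUSPARSE
```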