ChrisRackauckas closed this 2 months ago
Attention: Patch coverage is 51.85185% with 39 lines in your changes missing coverage. Please review.
Project coverage is 62.61%. Comparing base (d050e01) to head (084d0d6).
Files | Patch % | Lines |
---|---|---|
ext/LinearSolveCUDAExt.jl | 0.00% | 27 Missing :warning: |
src/LinearSolve.jl | 78.43% | 11 Missing :warning: |
ext/LinearSolveCUDSSExt.jl | 0.00% | 1 Missing :warning: |
:umbrella: View full report in Codecov by Sentry.
This is hard to test, but it is now working with CUDSS, and I can check that the symbolic factorization is reused! MWE:
```julia
using CUDA, CUDA.CUSPARSE
using CUDSS
using SparseArrays, LinearAlgebra
using LinearSolve

T = Float64
n = 100
A_cpu = sprand(T, n, n, 0.05) + I
x_cpu = zeros(T, n)
b_cpu = rand(T, n)

# Move the problem to the GPU
A_gpu = CuSparseMatrixCSR(A_cpu)
x_gpu = CuVector(x_cpu)
b_gpu = CuVector(b_cpu)

prob = LinearProblem(A_gpu, b_gpu)
sol = solve(prob, LUFactorization())

# Caching interface: factorize once, then reuse
cache = init(prob, LUFactorization())
solve!(cache)

cache.b = CuVector(rand(T, n))  # new b: reuses the existing factorization
solve!(cache)

cache.A = CuSparseMatrixCSR(A_cpu + 3I)  # new A with the same sparsity pattern:
solve!(cache)                            # refactorizes, reusing the symbolic factorization

cache.b = CuVector(rand(T, n))
solve!(cache)
```
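For readers without a GPU, the behavior being exercised here can be sketched on the CPU with SparseArrays' UMFPACK wrapper, where `lu!(F, A)` reuses the symbolic analysis of an existing factorization when the new matrix has the same sparsity pattern. This is an illustrative analogue of what the cache does, not the CUDSS code path itself:

```julia
using SparseArrays, LinearAlgebra

n = 100
A = sprand(Float64, n, n, 0.05) + I
b = rand(n)

F = lu(A)          # symbolic analysis + numeric factorization
x1 = F \ b         # solve with the cached factorization

B = A + 3I         # new values, same sparsity pattern
lu!(F, B)          # numeric refactorization only; symbolic analysis is reused
x2 = F \ b
norm(B * x2 - b)   # small residual: the refactorized solve is correct
```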
This is with https://github.com/JuliaArrays/ArrayInterface.jl/pull/433 and https://github.com/exanauts/CUDSS.jl/pull/33. This should be sufficient for CUDSS.jl to be used optimally as well.