JuliaSmoothOptimizers / LinearOperators.jl

Linear Operators for Julia

Add LinearOperator constructor for CUDA #321

Closed · nHackel closed 3 months ago

nHackel commented 3 months ago

This PR adds a method of the LinearOperator constructor with a correct default storage type for CUDA's CuArray.

This should partially fix #307 (it does not fix the upstream error in NLPModels.jl); in particular, this error is resolved:

julia> using CUDA, LinearOperators

julia> N = 8
8

julia> A = CUDA.rand(N, N);

julia> x = CUDA.rand(N);

julia> (A + opEye(Float32, N, S = typeof(x)));
ERROR: LinearOperatorException("storage types cannot be promoted to a concrete type")
Stacktrace:
 [1] +(op1::LinearOperator{Float32, Int64, LinearOperators.var"#5#8"{CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}}, LinearOperators.var"#6#9"{CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}}, LinearOperators.var"#7#10"{CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}}, Vector{Float32}}, op2::LinearOperator{Float32, Int64, LinearOperators.var"#135#136"{Int64}, LinearOperators.var"#135#136"{Int64}, LinearOperators.var"#135#136"{Int64}, CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}})
   @ LinearOperators ~/.julia/packages/LinearOperators/lK3j5/src/operations.jl:190
 [2] +(M::CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, op::LinearOperator{Float32, Int64, LinearOperators.var"#135#136"{Int64}, LinearOperators.var"#135#136"{Int64}, LinearOperators.var"#135#136"{Int64}, CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}})
   @ LinearOperators ~/.julia/packages/LinearOperators/lK3j5/src/operations.jl:196
 [3] top-level scope
   @ REPL[14]:1
 [4] top-level scope
   @ ~/.julia/packages/CUDA/htRwP/src/initialization.jl:206

The method is defined in a package extension for CUDA. I've also tried to make it available for older Julia versions with Requires, similar to how it was done for the ChainRulesExt.
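A rough sketch of the kind of method such an extension could define (the details below are illustrative assumptions, not necessarily the exact code in this PR):

module LinearOperatorsCUDAExt

using LinearOperators, CUDA, LinearAlgebra

# Build an operator whose default storage type matches the CuMatrix input,
# so that products allocate CuVector results instead of Vector.
function LinearOperators.LinearOperator(
  M::CuMatrix{T};
  symmetric = false,
  hermitian = false,
) where {T}
  prod!(res, v, α, β) = mul!(res, M, v, α, β)
  tprod!(res, u, α, β) = mul!(res, transpose(M), u, α, β)
  ctprod!(res, w, α, β) = mul!(res, adjoint(M), w, α, β)
  LinearOperator(T, size(M, 1), size(M, 2), symmetric, hermitian,
                 prod!, tprod!, ctprod!, S = CuVector{T})
end

end # module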

nHackel commented 3 months ago

Another option could be to write an extension for AbstractGPUArrays; however, this would require some workarounds to strip the array type of its parameters.

One way might be the workaround shown here, or maybe a typeof(similar(...)) call. If such a workaround, made generic over different GPU backends, sounds fine, I can close this PR and try it in a new one.
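If it helps, the typeof(similar(...)) idea could be a small helper along these lines (a hypothetical name and sketch, not code from this PR):

# Derive a compatible 1-d storage type from a matrix by asking `similar`
# for an (empty) vector with the same backing storage.
vector_storage_type(M::AbstractMatrix{T}) where {T} = typeof(similar(M, T, 0))

# vector_storage_type(rand(Float32, 4, 4)) == Vector{Float32}
# vector_storage_type(CUDA.rand(4, 4))     == CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}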

github-actions[bot] commented 3 months ago
Package name | latest | stable
CaNNOLeS.jl
DCISolver.jl
FletcherPenaltySolver.jl
JSOSolvers.jl
Krylov.jl
NLPModels.jl
NLPModelsModifiers.jl
PROPACK.jl
Percival.jl
QuadraticModels.jl
SolverTools.jl
geoffroyleconte commented 3 months ago

Hi @nHackel, thank you for this PR!

I was wondering if something like

using CUDA

A = CUDA.rand(10, 10)
tA = typeof(A)
# rebuild the array's wrapper type with dimension 1, keeping the trailing parameters (e.g. the buffer type)
S = tA.name.wrapper{eltype(A), 1, tA.parameters[3:end]...}
test_vec = S(undef, 10)

would work for any matrix type (including other GPUArrays, but I can only test with CUDA) to avoid having CUDA in the extensions and solve your issue? However, it is not very pretty and adds a few allocations, so I'm not sure it is the best solution:

julia> @allocated tA.parameters[3:end]
128

I think that

S = tA.name.wrapper{eltype(A), 1, tA.parameters[3]}

might work at least for CUDA but probably not for CPU arrays.
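For plain CPU arrays this would fail, since Array carries only two type parameters, so tA.parameters[3] throws a BoundsError:

julia> typeof(rand(3, 3)).parameters
svec(Float64, 2)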

Let me know if you don't see an obvious answer; we can probably merge this PR and open an issue to improve genericity in the future.

nHackel commented 3 months ago

Hello @geoffroyleconte, thanks for looking into the PR!

I thought about the workarounds a bit more, and I fear one issue we would run into if we try to make this very generic (either just for GPU arrays or even for all arrays) is the various matrix types whose correct storage/result type is not simply a 1-dimensional version of themselves.

For example, a sparse matrix ultimately produces a dense result vector. While one could handle that with a special case for AbstractSparseMatrix, the same problem could occur for any number of array types we don't know about.
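For illustration (just a quick check, not part of this PR), a sparse-dense product already yields a plain Vector, so the right storage type for a SparseMatrixCSC is Vector{T} rather than a 1-dimensional sparse type:

using SparseArrays

A = sprand(8, 8, 0.3)
y = A * rand(8)    # y isa Vector{Float64}, not a SparseVector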

nHackel commented 3 months ago

I just thought about adding the method storage_type(M::CuMatrix{T}) where {T} = CuVector{T} to this PR. Then I had the idea that we could do the following and remove the constructor I have defined here so far:


function LinearOperator(
  M::AbstractMatrix{T};
  symmetric = false,
  hermitian = false,
  S = storage_type(M),  # previously S = Vector{T}
) where {T}

for "all" arrays here.

That change would be a bit more invasive, since it affects the fallback for all matrix types. Let me know what you think; I can either just add storage_type for CuArray or make the more invasive change.
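A rough sketch of that more invasive variant (hedged, not final code; the generic fallback would live in LinearOperators.jl itself and the specialized method in the CUDA extension):

# generic fallback, keeping the current behaviour for CPU matrices
storage_type(::AbstractMatrix{T}) where {T} = Vector{T}

# specialized method, defined in the CUDA extension
storage_type(::CuMatrix{T}) where {T} = CuVector{T}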

geoffroyleconte commented 3 months ago

Yes, that's a great idea! Adding new array types would then require only a few lines of code.
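For instance, supporting another backend would then (hypothetically) come down to a single method, e.g. for AMDGPU:

storage_type(::ROCMatrix{T}) where {T} = ROCVector{T}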