Is the complaint about H or about cg_op here?
I am not sure; I need to investigate further. It is the product of the two together that fails; the product of H with itself, or of cg_op with itself, seems to work.
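For reference, a minimal restatement of the observed behavior (a sketch; H and cg_op as in the original code):

julia> H * H;          # works
julia> cg_op * cg_op;  # works
julia> H * cg_op;      # throws a LinearOperatorException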
The error is probably due to this line:
https://github.com/JuliaSmoothOptimizers/LinearOperators.jl/blob/22cf605514369b52e70100501d0f92dd50fcccc6/src/operations.jl#L119
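That line sits in the operator product code, and the guard there is presumably of this form (a paraphrased sketch, not the verbatim source):

# Combining two operators requires a concrete common storage type:
S = promote_type(storage_type(op1), storage_type(op2))
isconcretetype(S) || throw(LinearOperatorException("storage types cannot be promoted to a concrete type"))

If the two operators live on different devices, the promotion only reaches an abstract supertype and the exception is thrown.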
What are the storage types of H and cg_op?
Oh, okay, maybe there is an error in the Hessian operator... here are the storage types:
julia> LinearOperators.storage_type(cg_op)
CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}
julia> LinearOperators.storage_type(H)
Vector{Float64} (alias for Array{Float64, 1})
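Those two storage types indeed have no concrete common promotion, which is exactly what the guard above rejects. A quick check (sketch; the intermediate abstract type may vary with the CUDA.jl version):

julia> S = promote_type(Vector{Float64}, CuArray{Float64, 1, CUDA.Mem.DeviceBuffer});

julia> isconcretetype(S)
false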
Yes, where is hess_op! defined? I can't find it in ADNLPModels.
Hello, I have stumbled over what I think is the same underlying issue, or at least the same error. Here is an MWE:
julia> using CUDA, LinearOperators
julia> N = 8
8
julia> A = CUDA.rand(N, N);
julia> x = CUDA.rand(N);
julia> (A + opEye(Float32, N, S = typeof(x)));
ERROR: LinearOperatorException("storage types cannot be promoted to a concrete type")
Stacktrace:
[1] +(op1::LinearOperator{Float32, Int64, LinearOperators.var"#5#8"{CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}}, LinearOperators.var"#6#9"{CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}}, LinearOperators.var"#7#10"{CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}}, Vector{Float32}}, op2::LinearOperator{Float32, Int64, LinearOperators.var"#135#136"{Int64}, LinearOperators.var"#135#136"{Int64}, LinearOperators.var"#135#136"{Int64}, CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}})
@ LinearOperators ~/.julia/packages/LinearOperators/lK3j5/src/operations.jl:190
[2] +(M::CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, op::LinearOperator{Float32, Int64, LinearOperators.var"#135#136"{Int64}, LinearOperators.var"#135#136"{Int64}, LinearOperators.var"#135#136"{Int64}, CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}})
@ LinearOperators ~/.julia/packages/LinearOperators/lK3j5/src/operations.jl:196
[3] top-level scope
@ REPL[14]:1
[4] top-level scope
@ ~/.julia/packages/CUDA/htRwP/src/initialization.jl:206
In this example this method is called, which turns the CUDA array A into a LinearOperator with the default storage_type of Vector{Float32}.
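This can be checked directly (a sketch, reusing A from the MWE above):

julia> op = LinearOperator(A);  # wraps the CuArray, keeping the default CPU storage type

julia> LinearOperators.storage_type(op)
Vector{Float32} (alias for Array{Float32, 1})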
If I apply the workaround A_workaround = LinearOperator(A, S = typeof(x)), I get a linear operator which I can then use.
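With that, the original MWE goes through (sketch):

julia> A_workaround = LinearOperator(A, S = typeof(x));

julia> (A_workaround + opEye(Float32, N, S = typeof(x))) * x;  # no exception; the result is a CuArray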
One possible solution would be to create a package extension for CUDA, which implements LinearOperator(A::CuArray{...}) and adds the appropriate storage type. If that sounds like an acceptable solution, I could provide a PR for this.
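For illustration, such an extension could look roughly like this (a hypothetical sketch: the module name, the invoke forwarding, and the similar-based default storage type are my assumptions, not existing code):

# ext/LinearOperatorsCUDAExt.jl -- hypothetical sketch
module LinearOperatorsCUDAExt

using LinearOperators, CUDA

# Default S to a concrete GPU vector type matching M (via similar) instead of
# the CPU Vector{T}, then defer to the generic AbstractMatrix constructor.
function LinearOperators.LinearOperator(M::CuMatrix; S = typeof(similar(M, 0)), kwargs...)
  invoke(LinearOperators.LinearOperator, Tuple{AbstractMatrix}, M; S = S, kwargs...)
end

end

With such a method in place, A + opEye(Float32, N, S = typeof(x)) from the MWE above would work without passing S manually.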
Best regards
EDIT: this uses https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl/pull/208 or later