MadNLP.jl

A solver for nonlinear programming
MIT License

Unable to get GPU solver working #381

Open · taDachs opened 1 day ago

taDachs commented 1 day ago

Hey, I tried following the quickstart for GPU solvers, but I was unable to get the solver to run.

My code looks like this:

```julia
using JuMP
using MadNLP
using MadNLPGPU

# Constants
T_f = 10.0   # time horizon [s]
S = π        # target angle [rad]

N = 100      # number of time steps
h = T_f / N  # step size [s]

G = 9.81  # gravitational acceleration [m/s^2]
L = 1.0   # pendulum length [m]

model = Model(() -> MadNLP.Optimizer(linear_solver = MadNLPGPU.CUDSSSolver))

@variable(model, θ[1:N])      # angle
@variable(model, θ_dot[1:N])  # angular velocity
@variable(model, u[1:N])      # control input

# Objective: minimize control effort
@objective(model, Min, h * sum(u .^ 2))

# Explicit-Euler pendulum dynamics
@constraint(model, θ[2:end] == θ[1:end-1] + h * θ_dot[1:end-1])
@constraint(model, θ_dot[2:end] == θ_dot[1:end-1] + h * (-G * sin.(θ[1:end-1]) / L + u[1:end-1]))

# Boundary conditions
@constraint(model, θ[1] == 0)
@constraint(model, θ_dot[1] == 0)
@constraint(model, θ[N] == S)
@constraint(model, θ_dot[N] == 0)

optimize!(model)
```

I get an error message which I don't really understand:

```
ERROR: LoadError: MethodError: no method matching MadNLPGPU.CUDSSSolver(::SparseArrays.SparseMatrixCSC{Float64, Int32}; opt::MadNLPGPU.CudssSolverOptions)
The type `MadNLPGPU.CUDSSSolver` exists, but no method is defined for this combination of argument types when trying to construct it.

Closest candidates are:
  MadNLPGPU.CUDSSSolver(::Union{Nothing, CUDSS.CudssSolver}, ::CUDA.CUSPARSE.CuSparseMatrixCSC{T}, ::CUDA.CuArray{T, 1}, ::CUDA.CuArray{T, 1}, ::MadNLPGPU.CudssSolverOptions, ::MadNLP.MadNLPLogger) where T got unsupported keyword argument "opt"
   @ MadNLPGPU ~/.julia/packages/MadNLPGPU/F7lAy/src/LinearSolvers/cudss.jl:13
  MadNLPGPU.CUDSSSolver(::CUDA.CUSPARSE.CuSparseMatrixCSC{T}; opt, logger) where T
   @ MadNLPGPU ~/.julia/packages/MadNLPGPU/F7lAy/src/LinearSolvers/cudss.jl:22

Stacktrace:
 [1] create_kkt_system(::Type{MadNLP.SparseKKTSystem}, cb::MadNLP.SparseCallback{Float64, Vector{Float64}, Vector{Int64}, MadNLPMOI.MOIModel{Float64}, MadNLP.MakeParameter{Vector{Float64}, Vector{Int64}}, MadNLP.EnforceEquality}, ind_cons::@NamedTuple{ind_eq::Vector{Int64}, ind_ineq::Vector{Int64}, ind_fixed::Vector{Int64}, ind_lb::Vector{Int64}, ind_ub::Vector{Int64}, ind_llb::Vector{Int64}, ind_uub::Vector{Int64}}, linear_solver::Type{MadNLPGPU.CUDSSSolver}; opt_linear_solver::MadNLPGPU.CudssSolverOptions, hessian_approximation::Type)
   @ MadNLP ~/.julia/packages/MadNLP/66k4O/src/KKT/Sparse/augmented.jl:128
 [2] MadNLPSolver(nlp::MadNLPMOI.MOIModel{Float64}; kwargs::@Kwargs{linear_solver::UnionAll})
   @ MadNLP ~/.julia/packages/MadNLP/66k4O/src/IPM/IPM.jl:155
 [3] optimize!(model::MadNLPMOI.Optimizer)
   @ MadNLPMOI ~/.julia/packages/MadNLP/66k4O/ext/MadNLPMOI/MadNLPMOI.jl:946
 [4] optimize!
   @ ~/.julia/packages/MathOptInterface/gLl4d/src/Bridges/bridge_optimizer.jl:367 [inlined]
 [5] optimize!
   @ ~/.julia/packages/MathOptInterface/gLl4d/src/MathOptInterface.jl:122 [inlined]
 [6] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{MadNLPMOI.Optimizer}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
   @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/gLl4d/src/Utilities/cachingoptimizer.jl:321
 [7] optimize!(model::Model; ignore_optimize_hook::Bool, _differentiation_backend::MathOptInterface.Nonlinear.SparseReverseMode, kwargs::@Kwargs{})
   @ JuMP ~/.julia/packages/JuMP/i68GU/src/optimizer_interface.jl:595
 [8] optimize!(model::Model)
   @ JuMP ~/.julia/packages/JuMP/i68GU/src/optimizer_interface.jl:546
 [9] top-level scope
   @ /home/hcr/ws/for_issue.jl:36
```

Is there a problem with how I set up my optimization problem? I couldn't find any documentation on the GPU solvers.

sshin23 commented 20 hours ago

Thanks for reporting this, @taDachs. This should be fixed. To use the GPU features, please try ExaModels: https://exanauts.github.io/ExaModels.jl/stable/guide/
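
For reference, here is a rough, untested sketch of the pendulum problem above written directly in ExaModels; the `ExaCore`/`variable`/`objective`/`constraint` calls and the `backend` keyword follow the linked guide, so treat the exact signatures as assumptions:

```julia
using ExaModels, MadNLPGPU, CUDA

T_f, N = 10.0, 100
h = T_f / N
G, L = 9.81, 1.0
S = π

# Build the model on the GPU; drop the backend keyword to stay on the CPU.
c = ExaCore(Float64; backend = CUDABackend())

θ = variable(c, N)      # angle
θ_dot = variable(c, N)  # angular velocity
u = variable(c, N)      # control input

# Objective: minimize control effort (the generator's terms are summed).
objective(c, h * u[i]^2 for i in 1:N)

# Explicit-Euler pendulum dynamics; constraints default to equality with zero.
constraint(c, θ[i+1] - θ[i] - h * θ_dot[i] for i in 1:N-1)
constraint(c, θ_dot[i+1] - θ_dot[i] - h * (-G * sin(θ[i]) / L + u[i]) for i in 1:N-1)

# Boundary conditions, written as single-element equality constraints.
constraint(c, θ[i] for i in 1:1)
constraint(c, θ_dot[i] for i in 1:1)
constraint(c, θ[i] - S for i in N:N)
constraint(c, θ_dot[i] for i in N:N)

m = ExaModel(c)
result = madnlp(m)
```

With `backend = CUDABackend()`, the model callbacks are evaluated on the GPU, and `madnlp` should then select a GPU linear solver for you.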

If you want to use the GPU features with JuMP, one option is the experimental JuMP interface, but this might be less stable/efficient: https://exanauts.github.io/ExaModels.jl/stable/jump/
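
For completeness, a minimal sketch of that experimental interface (untested; the `ExaModel(jump_model; backend = ...)` constructor is an assumption based on the linked page):

```julia
using ExaModels, JuMP, CUDA, MadNLPGPU

# A small stand-in JuMP model (Rosenbrock) just to illustrate the conversion.
model = Model()
@variable(model, x[1:2])
@objective(model, Min, (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2)

# Convert the JuMP model into an ExaModel on the GPU, then solve with MadNLP.
em = ExaModel(model; backend = CUDABackend())
result = madnlp(em)
```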