GAMS-dev / gams.jl

A MathOptInterface Optimizer to solve JuMP models using GAMS
MIT License

[Q]: which are the benefits of gams.jl over the standard solver calling? #1

Closed yordiak closed 4 years ago

yordiak commented 4 years ago

hello,

It is not clear from the description what the benefits of using this package are over calling the solver directly. For instance, the transportation problem in the example can be solved by replacing

model = Model(GAMS.Optimizer)
set_optimizer_attribute(model, MOI.Silent(), !verbose)

with

model = Model(Gurobi.Optimizer)

given that I have a licensed version of Gurobi.

So, what are the benefits of the former approach over the latter?

renkekuhlmann commented 4 years ago

Great question. Honest answer from a technical perspective: I personally don’t see a benefit for this particular use case (solving LPs with Gurobi). Even worse, you will experience a small time overhead when using Gurobi through GAMS on a JuMP model compared to the direct Julia interface. For legal reasons, rather than filling internal GAMS data structures directly in the MathOptInterface optimizer, GAMS.jl exports the JuMP data as a .gms file and calls GAMS, which then exports the result as a .gdx file. Finally, GAMS.jl imports this .gdx file. While the GDX import/export is very efficient, exporting the .gms file generates an overhead that grows with the problem size (mainly with the number of non-zeros). However, in most practical cases we believe this overhead to be very acceptable. Note that I added the transport problem mainly because it is referred to a lot in the GAMS documentation (e.g. this tutorial) and GAMS users will certainly recognize it.
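To make that routing concrete, here is a minimal sketch (my own illustration, not code from this thread) that sends a small LP through GAMS with Gurobi as the subsolver. It assumes the `GAMS.Solver()` and `GAMS.WorkDir()` optimizer attributes as documented in the GAMS.jl README; the working directory is where the intermediate .gms and .gdx files are written:

```julia
using JuMP, GAMS

# Sketch: solve an LP through GAMS, with Gurobi as the subsolver.
# GAMS.jl writes the model to a .gms file in the working directory,
# invokes GAMS (which calls Gurobi), and reads the result back from
# a .gdx file. Requires a local GAMS installation and licenses.
model = Model(GAMS.Optimizer)
set_optimizer_attribute(model, GAMS.Solver(), "gurobi")      # pick the subsolver
set_optimizer_attribute(model, GAMS.WorkDir(), mktempdir())  # where .gms/.gdx land

@variable(model, x >= 0)
@variable(model, y >= 0)
@objective(model, Min, 2x + 3y)
@constraint(model, x + y >= 1)
optimize!(model)
```

Pointing `GAMS.WorkDir()` at a temporary directory keeps the exchanged files out of your project folder; leave it at the default if you want to inspect the generated .gms file.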

The situation definitely changes for nonlinear programming though. First, there are a bunch of nonlinear programming solvers that do not have a native Julia/JuMP interface. Second, there are user requests in the JuMP community for accessing nonlinear solvers that are only accessible through GAMS (see https://discourse.julialang.org/t/using-antigone-from-julia/10917). Third, in the JuMP documentation (https://jump.dev/JuMP.jl/v0.20/nlp/#Factors-affecting-solution-time-1) it says:

The function evaluation time, on the other hand, is the responsibility of the modeling language. JuMP computes derivatives by using reverse-mode automatic differentiation with graph coloring methods for exploiting sparsity of the Hessian matrix [1]. As a conservative bound, JuMP's performance here currently may be expected to be within a factor of 5 of AMPL's.

While I find it a bit hard to give reliable statistics on this matter, we see a similar tendency in the case of GAMS. When function evaluations are dominant, this can result in better overall solution performance when using, for example, GAMS.jl rather than a native solver interface (even with the export overhead described above). Finally, using GAMS can be more convenient. While simply doing

using Pkg
Pkg.add("Ipopt")
using Ipopt, JuMP
model = Model(Ipopt.Optimizer)

for example, probably doesn’t give you the best Ipopt experience (mainly due to the default BLAS and linear solver), a

using Pkg
Pkg.add("GAMS")
using GAMS, JuMP
model = Model(GAMS.Optimizer)
set_optimizer_attribute(model, GAMS.Solver(), "ipopth") 

can do.
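To round out the nonlinear case, here is a small sketch (again my own example, not from the thread) of solving an NLP through GAMS with "ipopth", i.e. Ipopt built against the HSL linear solvers shipped with GAMS:

```julia
using JuMP, GAMS

# Sketch: minimize the Rosenbrock function through GAMS using "ipopth"
# (Ipopt with HSL linear solvers, as named in the GAMS solver manual).
# Requires a local GAMS installation with an appropriate license.
model = Model(GAMS.Optimizer)
set_optimizer_attribute(model, GAMS.Solver(), "ipopth")

@variable(model, x, start = 0.0)
@variable(model, y, start = 0.0)
@NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)
optimize!(model)
```

Swapping `"ipopth"` for any other GAMS NLP solver name (e.g. one without a native Julia interface) is a one-line change, which is the convenience point made above.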

In addition, there may be other considerations, like technical support from a commercial company, free community licenses, or discounts on solver packages (see the section "Package Discounts" in the price lists), that can make using GAMS.jl beneficial. But again, I doubt this applies in your case, since you already have a Gurobi license.

yordiak commented 4 years ago

@renkekuhlmann Thank you very much for the detailed explanation. I just mentioned Gurobi as an example (actually, I also have a license for the Xpress solver, which is more efficient than Gurobi for solving LP problems).