lanl-ansi / Alpine.jl

A Julia/JuMP-based Global Optimization Solver for Non-convex Programs
https://lanl-ansi.github.io/Alpine.jl/latest/

Bug in feasibility problem #126

Closed krishpat closed 5 years ago

krishpat commented 5 years ago

When I solve this feasibility NLP (given below) in Alpine, I get the error `ERROR: MethodError: no method matching one(::Type{Any})`. If I replace `@objective` with `@NLobjective`, I get a different error: ``ERROR: MethodError: Cannot `convert` an object of type Float64 to an object of type Expr``.

```julia
# `m` is a JuMP model configured with Alpine as the solver
@variable(m, -1 <= x[1:2] <= 1)
@objective(m, Min, 1)
@NLconstraint(m, x[1]*x[2] == 1)
```

Though this feasibility NLP can be solved with other solvers such as Ipopt or Juniper.jl (global optimality is not required here), I wanted to keep this bug reported.
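For reference, here is a minimal sketch of solving the same feasibility problem directly with Ipopt, assuming the JuMP 0.18-era API that was current at the time of this issue (the `IpoptSolver` options shown are assumptions):

```julia
using JuMP, Ipopt

# Same feasibility NLP, but with Ipopt as a local NLP solver
m = Model(solver=IpoptSolver(print_level=0))
@variable(m, -1 <= x[1:2] <= 1)
@objective(m, Min, 1)            # constant objective: a pure feasibility check
@NLconstraint(m, x[1]*x[2] == 1)

status = solve(m)
# Any feasible point must satisfy x[1]*x[2] == 1 within the bounds,
# e.g. x = (1, 1) or x = (-1, -1); a local solver suffices here.
```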

harshangrjn commented 5 years ago

@krishpat This has been fixed! v0.1.10 has been released with the updates. Update Alpine and let me know if this solves this issue.

krishpat commented 5 years ago

@harshangrjn It works now! Thanks for the update. I was curious whether Alpine still tries to prove global optimality when the objective is a constant, since it performs a few lower-bounding iterations. Here is the output I get for the above problem:

```
====================================================================================================
LOWER-BOUNDING ITERATIONS
| Iter   | Incumbent       | Best Incumbent      | Lower Bound        | Gap (%)         | Time
| 1      | -               | Inf                 | 1.0                | LARGE           | 0.02s
| 2      | -               | Inf                 | 1.0                | LARGE           | 0.03s
| finish | 1.0             | 1.0                 | 1.0                | 0.0             | 0.03s
====================================================================================================
```

harshangrjn commented 5 years ago

@krishpat The reason you see the lower-bounding iterations is that Ipopt didn't find a feasible solution in the initial local solve. Alpine therefore iterates, partitioning the variable domains, until a feasible solution (the incumbent) is found, converging here in the third iteration. If Ipopt had converged to a feasible point in the initial solve, Alpine wouldn't have partitioned the variable domains.

krishpat commented 5 years ago

Got it! Thanks for the clarification.