mechmotum / cyipopt

Cython interface for the interior point optimizer IPOPT
Eclipse Public License 2.0

Cyipopt gives weird solution on simple optimization problem #232

Closed GuanJinNL closed 11 months ago

GuanJinNL commented 11 months ago

Dear cyipopt community,

I have recently started using Ipopt through Python, and I conda-installed the cyipopt package, as well as the Ipopt solver itself, from conda-forge.

I copy-pasted the example code from the cyipopt documentation page and it worked fine. However, when I tried a simple optimization problem of my own, the solver gave a weird solution. The simple problem I tried was: minimize x[0] + x[1], subject to x[0] - x[1] <= 0 and x[0] >= 0. Obviously, the optimal value of this problem should be 0, attained at x = (0, 0).

I defined the problem in the same way as suggested on the cyipopt page:

import cyipopt
import numpy as np


class HS073(cyipopt.Problem):

    def objective(self, x):
        """Returns the scalar value of the objective given x."""
        return x[0] + x[1]

    def gradient(self, x):
        """Returns the gradient of the objective with respect to x."""
        return np.array([1, 1])

    def constraints(self, x):
        """Returns the constraints."""
        return np.array([x[0] - x[1]])

    def jacobian(self, x):
        """Returns the Jacobian of the constraints with respect to x."""
        return np.array([1, -1])

    def intermediate(self, alg_mod, iter_count, obj_value, inf_pr, inf_du, mu,
                     d_norm, regularization_size, alpha_du, alpha_pr,
                     ls_trials):
        """Prints information at every Ipopt iteration."""
        iterate = self.get_current_iterate()
        infeas = self.get_current_violations()
        primal = iterate["x"]
        jac = self.jacobian(primal)
        msg = "Objective value at iteration #{:d} is - {:g}"

        print(msg.format(iter_count, obj_value))
        print("Iteration:", iter_count)
        print("Primal iterate:", primal)
        print("Flattened Jacobian:", jac)
        print("Dual infeasibility:", infeas["grad_lag_x"])

Then I added the bounds and formulated the problem:

lb = [0, 0]
ub = [2e10, 2e10]
cl = [-2e10]
cu = [0]
x0 = [1, 2]

nlp = HS073(
    n=len(x0),
    m=len(cl),
    lb=lb,
    ub=ub,
    cl=cl,
    cu=cu,
)

Finally, I solve the problem:

nlp.add_option('mu_strategy', 'adaptive')
nlp.add_option('tol', 1e-10)
x, info = nlp.solve(x0)

This is the output. After just one iteration, the iterates already make no sense:

Objective value at iteration #0 is - 3
Iteration: 0
Primal iterate: [1. 2.]
Flattened Jacobian: [ 1 -1]
Dual infeasibility: [1. 1.]
Objective value at iteration #1 is - 9232.62
Iteration: 1
Primal iterate: [2308.15382761 6924.4615058 ]
Flattened Jacobian: [ 1 -1]
Dual infeasibility: [-3807.1261665 -5330.17863752]
Objective value at iteration #2 is - 32831.6
Iteration: 2
Primal iterate: [ 9283.95016194 23547.67587445]
Flattened Jacobian: [ 1 -1]
Dual infeasibility: [-10191.60234577 -13407.40835721]
Objective value at iteration #3 is - 34372.2

The iterations continued until hitting 3000 steps (Ipopt's default iteration limit), and the final answer was this:

Objective value at iteration #3000 is - 28980.6
Iteration: 3000
Primal iterate: [ 6848.26950247 22132.29693556]
Flattened Jacobian: [ 1 -1]
Dual infeasibility: [0.99999354 0.99999283]

What is going on? I don't see any mistakes in my code. Could it be that this simple problem is not suitable for Ipopt? Or is there a bug in cyipopt?

I hope someone can help me clear up this confusion. Any help is appreciated!

brocksam commented 11 months ago

Your sample problem here is a linear program. Ipopt is a nonlinear solver, so it's not going to be best suited to solving LPs.

The fix here is to supply the exact Hessian to Ipopt; otherwise Ipopt will fall back on an (inaccurate) BFGS Hessian approximation. The exact Hessian is trivially all zeros for an LP:

    def hessian(self, x, lagrange, obj_factor):
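        # With cyipopt's default hessianstructure (the dense lower
        # triangle), n = 2 means three entries: (0, 0), (1, 0), (1, 1).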
        return np.array([0, 0, 0])

As for explaining what's going on, I'm hypothesising here. Even though your example seems trivial from a human's perspective, from Ipopt's algorithmic perspective, this may actually be a pretty hard problem to solve because it requires the decision variables to be found at their lower bound, and the constraint to be satisfied at its upper bound.
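For reference, a consolidated, self-contained version of the problem with the exact Hessian supplied might look like this (a sketch; the class name SimpleLP and the final print are illustrative, not from the code above):

    import cyipopt
    import numpy as np


    class SimpleLP(cyipopt.Problem):

        def objective(self, x):
            return x[0] + x[1]

        def gradient(self, x):
            return np.array([1.0, 1.0])

        def constraints(self, x):
            return np.array([x[0] - x[1]])

        def jacobian(self, x):
            return np.array([1.0, -1.0])

        def hessian(self, x, lagrange, obj_factor):
            # The Lagrangian of an LP has a zero Hessian everywhere.
            return np.array([0.0, 0.0, 0.0])


    x0 = [1, 2]
    nlp = SimpleLP(n=2, m=1, lb=[0, 0], ub=[2e10, 2e10], cl=[-2e10], cu=[0])
    nlp.add_option('tol', 1e-10)
    x, info = nlp.solve(x0)
    print(x)  # should now be close to [0, 0]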

GuanJinNL commented 11 months ago

Hi brocksam,

Many thanks for your answer! After I added the Hessian, it indeed converged!

The real problem that I want to solve has a nonlinear objective function but linear constraints. Do you know if linear constraints will also cause problems in Ipopt?

brocksam commented 11 months ago

Do you know if linear constraints will also cause problems in Ipopt?

Ipopt can solve LPs, just less efficiently than a dedicated LP solver. Linear constraints shouldn't be a problem, and they make supplying the exact Hessian easier because it will consist only of the objective-related terms.
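For instance, with a quadratic objective like x[0]**2 + x[1]**2 and the same linear constraint, the exact Hessian callback would reduce to something like this sketch (the 2x2 objective Hessian here is [[2, 0], [0, 2]]):

    def hessian(self, x, lagrange, obj_factor):
        # Linear constraints contribute nothing (zero second derivatives),
        # so only obj_factor times the lower triangle of the objective
        # Hessian remains.
        return obj_factor * np.array([2.0, 0.0, 2.0])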

I'll close this now as it feels resolved. Feel free to reopen if you still have related questions.