google / or-tools

Google's Operations Research tools:
https://developers.google.com/optimization/
Apache License 2.0

Nonlinear and Mixed-Integer Nonlinear Programming Support #582

Closed: bacalfa closed this issue 6 years ago

bacalfa commented 6 years ago

Is it possible to implement (MI)NLP models? If not, are there plans to support NLP and MINLP models? There are a few state-of-the-art solvers for such models (IPOPT, KNITRO, CONOPT, BARON).

SCIP is supported and can solve (MI)NLP models, but how do I define a model that isn't (MI)LP, since I'll be adding nonlinear expressions?
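
For illustration, a minimal sketch (assuming the Python pywraplp wrapper; the model is hypothetical) of the current limitation: the linear_solver layer only builds linear expressions, so there is no way to hand a nonlinear term to SCIP through it.

from ortools.linear_solver import pywraplp

# Build a small linear model through the linear_solver wrapper.
solver = pywraplp.Solver.CreateSolver("SCIP")
x = solver.NumVar(0.0, 10.0, "x")
y = solver.NumVar(0.0, 10.0, "y")

# Linear constraints and objectives are the only expressions this layer accepts.
solver.Add(2 * x + 3 * y <= 12)
solver.Maximize(x + y)
solver.Solve()
print(x.solution_value(), y.solution_value())

# A term like x * y or x ** 2 is rejected by the expression layer, so a
# nonlinear model cannot be stated here even though SCIP itself handles MINLP.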

sschnug commented 6 years ago

(I have never used this library apart from tiny examples.)

A quick search suggests there is no code for automatic differentiation, which those solvers absolutely need (unless you provide Jacobians/Hessians by hand). Alternatives such as mathematical modeling languages (e.g. AMPL) also seem to be unavailable.

In my opinion, supporting this would need a lot of additional code!
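
For context, "providing Jacobians/Hessians by hand" means deriving expressions like the gradient below manually for every model; here is a minimal SciPy sketch (the Rosenbrock function is just an illustrative choice), which is exactly the work automatic differentiation would take over.

import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2

def rosenbrock_grad(x):
    # Hand-derived partial derivatives of the Rosenbrock function.
    return np.array([
        -400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
        200 * (x[1] - x[0]**2),
    ])

result = minimize(rosenbrock, x0=np.array([0.0, 0.0]),
                  jac=rosenbrock_grad, method="BFGS")
print(result.x)  # converges to approximately [1., 1.]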

bacalfa commented 6 years ago

I agree that AD is essential. There's Pyomo.
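
For instance, a minimal Pyomo sketch (assuming an ipopt executable is on the path; the Rosenbrock model is just an illustration) of writing a nonlinear objective directly in the modeling layer, which then supplies exact derivatives to the solver through the ASL interface:

import pyomo.environ as pyo

model = pyo.ConcreteModel()
model.x = pyo.Var(initialize=0.0)
model.y = pyo.Var(initialize=0.0)

# Nonlinear (Rosenbrock) objective written directly as a Pyomo expression.
model.obj = pyo.Objective(
    expr=100 * (model.y - model.x**2)**2 + (1 - model.x)**2,
    sense=pyo.minimize,
)

# Pyomo writes an .nl file; IPOPT obtains exact derivatives via the AMPL/ASL route.
pyo.SolverFactory("ipopt").solve(model)
print(pyo.value(model.x), pyo.value(model.y))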

sschnug commented 6 years ago

Pyomo's AD comes from AMPL's ASL/MP, which imho is license-wise not compatible with Apache 2.0 (at least not in terms of shipping it). Edit: okay... it seems it would be compatible.

jiadongwang commented 6 years ago

Isn't https://projects.coin-or.org/ADOL-C what you are looking for? :)

bacalfa commented 6 years ago

What about autograd?

import autograd.numpy as np
from autograd import grad, hessian
from scipy.optimize import minimize

def rosenbrock(x):
    return 100*(x[1] - x[0]**2)**2 + (1 - x[0])**2

# Autograd builds the gradient and Hessian functions automatically.
rosenbrock_grad = grad(rosenbrock)
rosenbrock_hess = hessian(rosenbrock)

result = minimize(rosenbrock, x0=np.array([0.0, 0.0]), jac=rosenbrock_grad, hess=rosenbrock_hess, method="trust-exact")
print(result)
     fun: 1.1524542070786494e-13
    hess: array([[ 801.99967846, -399.99991693],
       [-399.99991693,  200.        ]])
     jac: array([  1.03264509e-05,  -5.37090170e-06])
 message: 'Optimization terminated successfully.'
    nfev: 18
    nhev: 18
     nit: 17
    njev: 15
  status: 0
 success: True
       x: array([ 0.99999979,  0.99999956])

bacalfa commented 6 years ago

And here's a combination of autograd and IPOPT (using pyipopt), which I compiled from source on a Mac:

import autograd.numpy as np
from autograd import grad, hessian
import pyipopt

def rosenbrock(x):
    return 100*(x[1] - x[0]**2)**2 + (1 - x[0])**2

# Again let autograd supply the derivatives instead of hand-coding them.
rosenbrock_grad = grad(rosenbrock)
rosenbrock_hess = hessian(rosenbrock)

x0 = np.array([0.0, 0.0])
pyipopt.set_loglevel(0)
# Unconstrained solve through pyipopt's convenience wrapper around IPOPT.
result = pyipopt.fmin_unconstrained(rosenbrock, x0, fprime=rosenbrock_grad, fhess=rosenbrock_hess)

print(result)
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************

This is Ipopt version 3.12.9, running with linear solver ma27.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        3

Total number of variables............................:        2
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.0000000e+00 0.00e+00 2.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  9.5312500e-01 0.00e+00 1.25e+01  -1.0 1.00e+00    -  1.00e+00 2.50e-01f  3
   2  4.8320569e-01 0.00e+00 1.01e+00  -1.0 9.03e-02    -  1.00e+00 1.00e+00f  1
   3  4.5708829e-01 0.00e+00 9.53e+00  -1.0 4.29e-01    -  1.00e+00 5.00e-01f  2
   4  1.8894205e-01 0.00e+00 4.15e-01  -1.0 9.51e-02    -  1.00e+00 1.00e+00f  1
   5  1.3918726e-01 0.00e+00 6.51e+00  -1.7 3.49e-01    -  1.00e+00 5.00e-01f  2
   6  5.4940990e-02 0.00e+00 4.51e-01  -1.7 9.29e-02    -  1.00e+00 1.00e+00f  1
   7  2.9144630e-02 0.00e+00 2.27e+00  -1.7 2.49e-01    -  1.00e+00 5.00e-01f  2
   8  9.8586451e-03 0.00e+00 1.15e+00  -1.7 1.10e-01    -  1.00e+00 1.00e+00f  1
   9  2.3237475e-03 0.00e+00 1.00e+00  -1.7 1.00e-01    -  1.00e+00 1.00e+00f  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10  2.3797236e-04 0.00e+00 2.19e-01  -1.7 5.09e-02    -  1.00e+00 1.00e+00f  1
  11  4.9267371e-06 0.00e+00 5.95e-02  -1.7 2.53e-02    -  1.00e+00 1.00e+00f  1
  12  2.8189505e-09 0.00e+00 8.31e-04  -2.5 3.20e-03    -  1.00e+00 1.00e+00f  1
  13  1.0095040e-15 0.00e+00 8.68e-07  -5.7 9.78e-05    -  1.00e+00 1.00e+00f  1
  14  1.3288608e-28 0.00e+00 2.02e-13  -8.6 4.65e-08    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 14

                                   (scaled)                 (unscaled)
Objective...............:   1.3288608467480825e-28    1.3288608467480825e-28
Dual infeasibility......:   2.0183854587685121e-13    2.0183854587685121e-13
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   0.0000000000000000e+00    0.0000000000000000e+00
Overall NLP error.......:   2.0183854587685121e-13    2.0183854587685121e-13

Number of objective function evaluations             = 36
Number of objective gradient evaluations             = 15
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 14
Total CPU secs in IPOPT (w/o function evaluations)   =      0.005
Total CPU secs in NLP function evaluations           =      0.023

EXIT: Optimal Solution Found.
(array([ 1.,  1.]), array([ 0.,  0.]), array([ 0.,  0.]), array([], dtype=float64), 1.3288608467480825e-28, 0)

bacalfa commented 6 years ago

I recognize that the above is Python-only and that a more general (C/C++) approach would be required.

samiit commented 6 years ago

Thanks for the combination of autograd and IPOPT (using pyipopt).

By the way, I have been using CasADi so far, which is quite good; but when I wanted to package my Python code into an executable, it was not all that easy. With your combination, I guess packaging would be much easier.
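
For comparison, a minimal CasADi sketch of the same Rosenbrock problem (assuming CasADi's bundled IPOPT; just an illustration, not part of or-tools):

import casadi as ca

# Symbolic decision variables; CasADi differentiates the expression graph itself.
x = ca.SX.sym("x", 2)
f = 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2

solver = ca.nlpsol("solver", "ipopt", {"x": x, "f": f})
sol = solver(x0=[0.0, 0.0])
print(sol["x"])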

Sam

lperron commented 6 years ago

We have no plans to support nonlinear mathematical optimization.