artofscience / SAOR

Sequential Approximate Optimization Repository
GNU General Public License v3.0

Wrapper function `optimize(...)` #85

Closed Giannis1993 closed 2 years ago

Giannis1993 commented 2 years ago

This is an issue to discuss the structure of this wrapper function. I think the initializations/imports could be improved, but I did not want to use `from somewhere import *`, which would have made things simpler. My proposal is to implement something along the following lines:

import logging

import numpy as np

import sao.move_limits
import sao.problems.subproblem
import sao.util

# Module-level logger; a plain `logging.getLogger` is assumed here
logger = logging.getLogger(__name__)


def optimize(problem, solver, approximation, criterion, *args, **kwargs):
    """
    This is a wrapper function for the main file of an optimization.
    Takes as arguments the following objects and performs the optimization main loop.

    :param problem: An object that holds the initial problem to be solved.
    :param solver: An object that holds the solver to be used.
    :param approximation: An object that holds the approximation (and the intervening vars) to be used, e.g. Taylor1(Linear())
    :param criterion: An object that holds the convergence criterion.
    :param args: Additional positional arguments (currently unused).
    :param kwargs: Optional keyword arguments, e.g. `x0` for the initial design and `plot` to enable plotting.
    :return: None
    """

    logger.info("Solving test_poly using y=MMA and solver=Ipopt Svanberg")

    # Instantiate the subproblem       # TODO: improve imports (didn't want to use import *; see the import sketch at the end of this comment)
    subproblem = sao.problems.subproblem.Subproblem(approximation=approximation)
    subproblem.set_limits([sao.move_limits.Bounds(problem.xmin, problem.xmax),
                           sao.move_limits.MoveLimit(move_limit=0.1, dx=problem.xmax - problem.xmin)])

    # Initialize design and iteration counter
    x_k = kwargs.get('x0', problem.x0)
    itte = 0

    # Optionally, instantiate plotter           # TODO: Change the 'criterion' to f'{criterion.__class__.__name__}'
    plot_flag = kwargs.get('plot', False)
    if plot_flag:
        plotter = sao.util.Plot(['objective', 'constraint', 'criterion', 'max_constr_violation'], path=".")

    # Optimization loop
    while not criterion.converged:

        # Evaluate responses and sensitivities at current point, i.e. g(X^(k)), dg(X^(k)), ddg(X^(k))
        f = problem.g(x_k)
        df = problem.dg(x_k)
        ddf = (problem.ddg(x_k) if subproblem.approx.__class__.__name__ == 'Taylor2' else None)

        # Build approximate sub-problem at X^(k)
        subproblem.build(x_k, f, df, ddf)

        # Call solver (x_k, g and dg are within approx instance)
        x_k, y, z, lam, xsi, eta, mu, zet, s = solver.subsolv(subproblem)

        # Print & Plot              # TODO: Print and Plot the criterion as criterion.value (where 0 is now)
        logger.info(
            'iter: {:^4d}  |  x: {:<10s}  |  obj: {:^9.3f}  |  criterion: {:^6.3f}  |  max_constr_viol: {:^6.3f}'.format(
                itte, np.array2string(x_k[0]), f[0], 0, max(0, max(f[1:]))))

        if plot_flag:
            plotter.plot([f[0], f[1], 0, max(0, max(f[1:]))])        # TODO: Add functionality to (optionally) plot criterion.value

        itte += 1
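        # NOTE (assumption): the criterion presumably needs to be assessed with the
        # updated design point at the end of each iteration; otherwise
        # `criterion.converged` never changes and the loop cannot terminate.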

    logger.info('Optimization loop converged!')
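
For the criterion TODOs above, here is a minimal sketch of what the plotter labels and the print/plot lines inside the loop could look like, assuming the criterion object exposes a `value` attribute once it has been assessed (the attribute name is taken from the TODO, not from an existing API):

# Hypothetical: label the plot column with the criterion's class name instead of the literal 'criterion'
plotter = sao.util.Plot(['objective', 'constraint',
                         f'{criterion.__class__.__name__}', 'max_constr_violation'], path=".")

# Hypothetical: report criterion.value where the hard-coded 0 is now
criterion_value = getattr(criterion, 'value', 0.0)
logger.info(
    'iter: {:^4d}  |  x: {:<10s}  |  obj: {:^9.3f}  |  criterion: {:^6.3f}  |  max_constr_viol: {:^6.3f}'.format(
        itte, np.array2string(x_k[0]), f[0], criterion_value, max(0, max(f[1:]))))
if plot_flag:
    plotter.plot([f[0], f[1], criterion_value, max(0, max(f[1:]))])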

The wrapper would then be used as follows:

def main():
    """
    This is an example where the optimizer wrapper function is used.
    The result should be equivalent to that of `example_polynomial_2D()`.

    :return:
    """

    # Instantiate problem, solver, approximation and convergence criterion
    problem = Polynomial2D()
    solver = SvanbergIP(problem.n, problem.m)
    approximation = Taylor2(Linear())
    x_k = np.array([2, 1.5])
    criterion = VariableChange(x_k)

    # Call wrapper function that includes the main optimization loop
    optimize(problem, solver, approximation, criterion, x0=x_k, plot=True)
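
On the imports TODO at the top of the wrapper: one way to avoid both `import *` and the long dotted names would be explicit `from ... import` statements, reusing the module paths that already appear in the snippet (a sketch, assuming these names stay importable from those modules):

from sao.problems.subproblem import Subproblem
from sao.move_limits import Bounds, MoveLimit
from sao.util import Plot

# The subproblem instantiation inside optimize(...) then shortens to:
subproblem = Subproblem(approximation=approximation)
subproblem.set_limits([Bounds(problem.xmin, problem.xmax),
                       MoveLimit(move_limit=0.1, dx=problem.xmax - problem.xmin)])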
MaxvdKolk commented 2 years ago

Looks good! Some remarks:

Giannis1993 commented 2 years ago

Implemented Max's feedback and added it to #88. Closing this issue.