DeustoTech / DyCon-toolbox

The DyCon platform is a collection of common tools for the investigation of differential equations, developed in the context of the DyCon project.
http://cmc.deusto.eus/dycon/

ClassicalDescent, #40

Closed DRuizB closed 5 years ago

DRuizB commented 5 years ago

I would change the name ClassicalDescent: the name suggests that the method works, while it does not. I have looked at the book you sent me, but I have not seen this approximation there. I do not see why we are not computing the adjoint correctly. Right now I do not have much more time to study the book in detail.

ClassicalDescent

This method is used within the GradientMethod method. GradientMethod executes this routine iteratively in order to obtain one update of the control per iteration. When ClassicalDescent is chosen, this function updates the control in the following way:

$$u_{new}=u_{old}-\alpha\, dJ$$

where $$dJ$$ is an approximation of the gradient of $$J$$, obtained from the adjoint state in the optimality conditions of Pontryagin's principle. The optimal control problem is defined by $$\min_U J=\Psi(Y(T))+\int^T_0 L(t,Y,U)\,dt$$

subject to:

$$\frac{d}{dt}Y=f(t,Y,U).$$

The gradient of $$J$$ is:

$$dJ=\partial_u H=\partial_uL+p\partial_uf$$

An approximation of the adjoint state $$p$$ is computed by solving, backward in time:

$$-\frac{d}{dt}p = f_Y (t,Y,U)p+L_Y(Y,U)$$

$$p(T)=\Psi_Y(Y(T))$$
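To make the backward adjoint solve concrete, here is a minimal sketch in Python (the toolbox itself is written in MATLAB; all names here are illustrative, not the toolbox API). It instantiates the scalar example $$dY/dt=-Y+u$$ with $$L=\tfrac12 u^2$$ and $$\Psi=\tfrac12 Y(T)^2$$, so that $$f_Y=-1$$, $$L_Y=0$$ and $$\Psi_Y=Y(T)$$:

```python
import numpy as np

def solve_adjoint(Y, t):
    """Integrate -dp/dt = f_Y*p + L_Y backward in time with an Euler scheme.

    For this example f_Y = -1 and L_Y = 0, so the adjoint ODE is dp/dt = p,
    with terminal condition p(T) = Psi_Y(Y(T)) = Y(T).
    """
    p = np.zeros_like(t)
    p[-1] = Y[-1]                      # terminal condition p(T) = Y(T)
    for k in range(len(t) - 1, 0, -1):
        dt = t[k] - t[k - 1]
        # one Euler step of dp/dt = p, marching from t_k down to t_{k-1}
        p[k - 1] = p[k] - dt * p[k]
    return p
```

For this example the exact adjoint is $$p(t)=Y(T)\,e^{t-T}$$, which the discrete solution approaches as the grid is refined.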

Once we have the expression of the gradient, we can start with an initial control, solve the adjoint problem, and evaluate the gradient. We then update the control in the direction of the negative approximate gradient with a step size $$\alpha$$. In this routine the user has to choose the step size.
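The full update (forward state solve, backward adjoint solve, gradient evaluation, control update) can be sketched as follows, again in Python for the same scalar example $$dY/dt=-Y+u$$, $$L=\tfrac12 u^2$$, $$\Psi=\tfrac12 Y(T)^2$$ with the assumed initial state $$Y(0)=1$$; this is an illustration of the scheme, not the toolbox's MATLAB implementation:

```python
import numpy as np

def classical_descent_step(u, t, alpha=0.1):
    """One ClassicalDescent-style update for the illustrative scalar example."""
    # 1. Solve the state equation dY/dt = -Y + u forward with explicit Euler.
    Y = np.zeros_like(t)
    Y[0] = 1.0                          # assumed initial state Y(0) = 1
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        Y[k + 1] = Y[k] + dt * (-Y[k] + u[k])
    # 2. Solve the adjoint -dp/dt = f_Y*p + L_Y (here dp/dt = p) backward.
    p = np.zeros_like(t)
    p[-1] = Y[-1]                       # p(T) = Psi_Y(Y(T)) = Y(T)
    for k in range(len(t) - 1, 0, -1):
        dt = t[k] - t[k - 1]
        p[k - 1] = p[k] - dt * p[k]
    # 3. Evaluate the gradient dJ = dL/du + p*df/du = u + p.
    dJ = u + p
    # 4. Update the control with the fixed step size alpha.
    return u - alpha * dJ, dJ
```

Iterating this step from an initial guess such as $$u\equiv 0$$ drives the gradient norm toward zero for this convex example, provided the step size is small enough.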

WARNING: With this routine, GradientMethod might not converge if the step size is not chosen properly, or it may be slow if the step size is chosen very small. For an adaptive step size with the Armijo rule, which guarantees convergence, see (adaptive stepsize).

This routine tells GradientMethod to stop when the chosen tolerance on the norm of the gradient (or on the relative error, at the user's choice) is reached. Moreover, there is a maximum number of iterations allowed.

MANDATORY INPUTS:

NAME: iCP DESCRIPTION: Control problem object; it carries all the information about the dynamics, the functional to be minimized, and the updates of the current best control found so far. CLASS: ControlProblem

NAME: tol DESCRIPTION: The desired tolerance. CLASS: double

OPTIONAL INPUT PARAMETERS

NAME: LengthStep DESCRIPTION: This parameter is the step length of the gradient method to be used. By default, this is 0.1. CLASS: double

OUTPUT PARAMETERS

NAME: Unew DESCRIPTION: Update of the control. CLASS: a vector-valued function in the form of a double matrix

NAME: Ynew DESCRIPTION: Update of the state vector. CLASS: a vector-valued function in the form of a double matrix

NAME: Jnew DESCRIPTION: New value of the functional. CLASS: double

NAME: dJnew DESCRIPTION: New value of the gradient. CLASS: a vector-valued function in the form of a double matrix

NAME: error DESCRIPTION: The error |dJ|/|U| or |dJ|, depending on the choice of the user. CLASS: double

NAME: stop DESCRIPTION: If this parameter is true, the routine tells GradientMethod to stop. CLASS: logical

Citations:

[1] Cohen, William C., review of Optimal Control Theory: An Introduction by Donald E. Kirk, Prentice-Hall, Inc., New York (1971), 452 pages. https://onlinelibrary.wiley.com/doi/abs/10.1002/aic.690170452