The DyCon platform is a collection of common tools for the investigation of differential equations, developed in the context of the DyCon project.
ConjugateGradientDescent
This method is used within the GradientMethod method. GradientMethod executes this routine iteratively in order to obtain one update of the control at each iteration. When ConjugateGradientDescent is chosen, this function updates the control in the following way:
$$u_{new}=u_{old}-\alpha_k s_k$$
where $$s_k$$ is the descent direction.
The optimal control problem is defined by
$$\min_U J=\min_U \left[\Psi(T,Y(T))+\int^T_0 L(t,Y,U)\,dt\right]$$
subject to:
$$\frac{d}{dt}Y=f(t,Y,U).$$
The gradient of $$J$$ is:
$$dJ=\partial_u H=\partial_uL+p\partial_uf$$
An approximation of $$p$$ is computed by solving the adjoint problem:
$$-\frac{d}{dt}p = f_Y (t,Y,U)p+L_Y(Y,U)$$
$$ p(T)=\psi_Y(Y(T))$$
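As a sketch of how the gradient can be evaluated from these equations (a minimal illustration, not the DyCon implementation: the toy dynamics $$f = aY+u$$, the cost $$L = \tfrac{1}{2}(Y^2+u^2)$$ with $$\Psi = 0$$, and the explicit-Euler discretization are all assumptions):

```python
import numpy as np

def gradient_via_adjoint(u, y0=1.0, a=-1.0, T=1.0):
    """Evaluate J and its gradient dJ = L_u + p*f_u for the toy problem
    dY/dt = a*Y + u, L = 0.5*(Y^2 + u^2), Psi = 0, via the adjoint method."""
    n = len(u)
    dt = T / n
    # forward sweep: explicit Euler for dY/dt = a*Y + u
    y = np.empty(n + 1)
    y[0] = y0
    for k in range(n):
        y[k + 1] = y[k] + dt * (a * y[k] + u[k])
    # backward sweep: discrete adjoint of the Euler scheme for
    # -dp/dt = f_Y*p + L_Y = a*p + Y, with p(T) = Psi_Y = 0,
    # chosen so that dt*dJ is the exact gradient of the discretized J
    p = np.empty(n + 1)
    p[-1] = 0.0
    for k in range(n - 1, -1, -1):
        p[k] = p[k + 1] + dt * (a * p[k + 1] + y[k])
    J = 0.5 * dt * np.sum(y[:-1] ** 2 + u ** 2)
    dJ = u + p[1:]                      # pointwise: dJ = L_u + p*f_u
    return J, dJ
```

A finite-difference check of the discretized functional reproduces `dt * dJ`, which is a standard way to validate an adjoint computation.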
Given the expression of the gradient, one can start from an initial control, solve the adjoint problem, and evaluate the gradient. Then one updates the control in the direction of the approximate gradient with a step size $$\alpha_k$$, which is determined by numerically solving:
$$\operatorname{argmin}_{\alpha_k}J(y_k,u_k-\alpha_k s_k)$$
where $$s_k$$ is chosen using the gradient of $$J$$. Note that the current implementation appears to take $$s_k$$ equal to the gradient itself, optimizing only the step length; a nonlinear conjugate gradient method would instead combine the current gradient with the previous search direction (e.g. the Fletcher–Reeves or Polak–Ribière formulas).
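For reference, a nonlinear conjugate gradient choice of $$s_k$$, together with a simple backtracking search for $$\alpha_k$$, can be sketched as follows (function names and parameters are hypothetical illustrations, not DyCon's API; Fletcher–Reeves is one of several standard formulas):

```python
import numpy as np

def fletcher_reeves_direction(dJ_new, dJ_old=None, s_old=None):
    """Fletcher-Reeves direction s_k = dJ_k + beta_k * s_{k-1}, with
    beta_k = |dJ_k|^2 / |dJ_{k-1}|^2.  On the first iteration (no previous
    gradient) this reduces to the plain gradient direction."""
    if dJ_old is None:
        return dJ_new.copy()
    beta = np.dot(dJ_new, dJ_new) / np.dot(dJ_old, dJ_old)
    return dJ_new + beta * s_old

def backtracking_alpha(J, u, dJ, s, alpha0=0.1, rho=0.5, c=1e-4, min_step=1e-10):
    """Shrink alpha until the Armijo condition holds for the update
    u - alpha*s.  Returning None signals the caller to stop, analogous to
    the MinLengthStep behaviour described below."""
    alpha, J0, slope = alpha0, J(u), np.dot(dJ, s)
    while alpha > min_step:
        if J(u - alpha * s) < J0 - c * alpha * slope:
            return alpha
        alpha *= rho
    return None
```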
This routine tells GradientMethod to stop when the chosen tolerance on the derivative (or on the relative error, at the user's choice) is reached. A maximum number of iterations is also enforced.
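The stopping test just described might look like this (a hypothetical helper, not DyCon's code; `relative` selects between the |dJ|/|U| and |dJ| criteria):

```python
import numpy as np

def should_stop(dJ, U, tol, iteration, max_iter, relative=True):
    """Return (stop, error): stop is True when the error measure falls
    below tol or the iteration budget is exhausted."""
    if relative:
        error = np.linalg.norm(dJ) / np.linalg.norm(U)   # |dJ|/|U|
    else:
        error = np.linalg.norm(dJ)                       # |dJ|
    return error < tol or iteration >= max_iter, error
```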
MANDATORY INPUTS:
NAME: iCP
DESCRIPTION: Control problem object; it carries all the information about the dynamics, the functional to be minimized, and the updates of the best control found so far.
CLASS: ControlProblem
NAME: tol
DESCRIPTION: the desired tolerance.
CLASS: double
OPTIONAL INPUT PARAMETERS
NAME: InitialLengthStep
DESCRIPTION: This parameter is the step length of the gradient method used at the beginning of the process. By default, this is 0.1.
CLASS: double
NAME: MinLengthStep
DESCRIPTION: This parameter is the lower bound on the step length of the gradient method. If the algorithm needs a step length smaller than this bound, it makes GradientMethod stop.
CLASS: double
OUTPUT PARAMETERS:
All the updates will be carried inside the iCP control problem object.
Name: Unew
Description: Update of the Control
class: a vector-valued function in the form of a double matrix
Name: Ynew
Description: Update of State Vector
class: a vector-valued function in the form of a double matrix
Name: Jnew
Description: New Value of functional
class: double
Name: dJnew
Description: New Value of gradient
class: a vector-valued function in the form of a double matrix
Name: error
Description: The error |dJ|/|U| or |dJ|, depending on the user's choice.
class: double
Name: stop
Description: if this parameter is true, the routine tells GradientMethod to stop
class: logical