oxfordcontrol / SOSTOOLS

A free MATLAB toolbox for formulating and solving sums of squares (SOS) optimization programs

SOS solution on adding range constraints #16

Closed Yngjx closed 1 year ago

Yngjx commented 1 year ago

Hi, I would like to ask a question about adding range constraints to demo2. If I add a constraint on the input state x to the solution method Lyapunov Function Search, is it still possible to continue solving it? I would be grateful if you could provide an example.

djagt commented 1 year ago

Depending on what you're thinking of exactly, that is certainly possible.

For example, suppose you want to test only local stability, on a ball around the origin. We may represent this domain by the semialgebraic set S = {x : g(x) >= 0}, where g(x) = 5 - (x1^2 + x2^2 + x3^2), i.e. a ball of radius sqrt(5). Then, we need the Lyapunov function V(x) to decrease only for x in S, so the derivative (dV/dx)f(x) need only be nonpositive for x satisfying g(x) >= 0. To enforce this, we may impose the constraint -(dV/dx)f(x) - s(x)g(x) >= 0 for some SOS variable s(x). If there exists an SOS function s(x) such that this inequality is satisfied, then, whenever g(x) >= 0, we have -(dV/dx)f(x) >= s(x)g(x) >= 0. Thus, this constraint guarantees that (dV/dx)f(x) <= 0 for x satisfying g(x) >= 0.

To do this in sosdemo2, you can replace line 28

expr = -(diff(V,x1)*f(1)+diff(V,x2)*f(2)+diff(V,x3)*f(3));
prog = sosineq(prog,expr);

with the following

g = 5 - vars'*vars;                             % g(x) = 5 - (x1^2 +x2^2 +x3^2);
[prog,s] = sossosvar(prog,monomials(vars,4));   % s(x) >= 0 for all x
expr = -jacobian(V,vars)*f - s*g;               % -(dV/dx)*f(x) - s(x)*g(x)
prog = sosineq(prog,expr);

Then, instead of testing whether (dV/dx)f(x) <= 0 for all x, you will be testing whether there exists an SOS function s(x) such that (dV/dx)f(x) <= -s(x)g(x) for all x, thus ensuring (dV/dx)f(x) <= 0 whenever g(x) >= 0. Note that, since the system in sosdemo2 is globally stable, any such local stability test will (should) be feasible, simply by setting s(x) = 0.

Please let me know if something is unclear, or if this is not what you were looking for.

Yngjx commented 1 year ago

Thank you for your reply. Maybe I didn't express myself clearly: what I mean is to add range constraints to the state x. For example, -6<x1<6 and -6<x2<6. In this case, can I still solve it?

djagt commented 1 year ago

There is no direct way of imposing constraints on the variables x, if that is what you mean. These variables are free variables in the optimization program, so the program (V(x)>0 and dV/dt(x)<=0) is solved for all real valued x.

However, if you want to solve the optimization program only for x in a certain domain (e.g. -6<x1,x2<6), then the approach I mentioned in the previous comment is the standard way to do so. You formulate the desired domain on which to solve the program as a semialgebraic set, e.g. as S:={x\in R^2: g1(x)>=0, g2(x)>=0, g3(x)>=0}, where g1(x)=6^2-x1^2, g2(x)=6^2-x2^2, and g3(x)=6^2+6^2-(x1^2+x2^2). (Here I add the redundant constraint g3(x)>=0 to reduce conservatism.) Then you incorporate these functions gi(x) into the constraint dV/dt(x)<=0 in such a way that this constraint need only be satisfied for x such that gi(x)>=0: -(dV/dx)f(x) - s1(x)g1(x) - s2(x)g2(x) - s3(x)g3(x) >= 0, for SOS variables si. You can declare this as

g1 = 6^2 - x1*x1;                                   % g1(x) = 6^2 - x1^2;
g2 = 6^2 - x2*x2;                                   % g2(x) = 6^2 - x2^2;
g3 = 2*6^2 - [x1,x2]*[x1;x2];                       % g3(x) = 6^2+6^2 - (x1^2 +x2^2);
[prog,s1] = sossosvar(prog,monomials(vars,4));      % s1(x) >= 0 for all x
[prog,s2] = sossosvar(prog,monomials(vars,4));      % s2(x) >= 0 for all x
[prog,s3] = sossosvar(prog,monomials(vars,4));      % s3(x) >= 0 for all x
expr = -jacobian(V,vars)*f - s1*g1 - s2*g2 - s3*g3; % -(dV/dx)*f(x) - s1*g1 - s2*g2 - s3*g3
prog = sosineq(prog,expr);

Other than this, however, I'm afraid there is no direct way of enforcing domain constraints on the variables x, solving the optimization program only for x in some domain.

Please let me know if this still doesn't answer your question.

Yngjx commented 1 year ago

Thank you @djagt! Regarding the range constraint, I've figured it out after your answer. One other question: what if the controller in the state equation is a nonlinear controller, i.e. the state equation contains a nonlinear part? As an example, the state equation of the system is determined by the following:

u = -(0.3357*x1 + 0.05*x2);                 % controller
if u > 20                                   % nonlinear (saturation) definition
    u = 20;
elseif u < -20
    u = -20;
end
f = [x2; 19.62*x1 - 2.6667*x2 + 26.6667*u]; % state equation
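(Aside: the piecewise definition above is a standard saturation at +/-20, which can also be written in one line, here shown as a sketch:)

```matlab
% One-line equivalent of the if/elseif clamp above (saturation at +/-20):
u = min(max(-(0.3357*x1 + 0.05*x2), -20), 20);
```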

Can it still be solved in the above example?

djagt commented 1 year ago

That is an interesting problem, and one I am afraid I do not have any experience with.

I imagine one approach you could take is to decompose the vector field into three zones: one where u<=-20, one where u>=20, and one in between. You could describe each of these zones by a semialgebraic set, using

% Declare the input and equations defining different zones
u = -(0.3357*x1 + 0.05*x2);
gl = u + 20;        % gl>=0 implies u>=-20
gu = u - 20;        % gu>=0 implies u>=20

So that u(x)<=-20 when gl(x)<=0, u(x)\in[-20,20] if gl(x)>=0 and gu(x)<=0, and u(x)>=20 if gu(x)>=0. In each of these sets S1={x: gl(x)<=0}, S2={x: gl(x)>=0, gu(x)<=0}, and S3={x: gu(x)>=0}, you can then define a different vector field, depending on the associated value of u in this set:

% Define vector field in three different zones:
f1 = [x2; 19.62*x1 - 2.6667*x2 + 26.6667*-20];  % u<=-20:       gl<=0
f2 = [x2; 19.62*x1 - 2.6667*x2 + 26.6667*u];    % u\in[-20,20]: gl>=0, gu<=0
f3 = [x2; 19.62*x1 - 2.6667*x2 + 26.6667*20];   % u>=20:        gu>=0

Finally, if you've defined some candidate Lyapunov functional V, you can then enforce dV/dx*fj <=0 locally in each set Sj using the methods from the previous comments:

% % Enforce dV/dt<=0 in each zone:
% Zone 1: u<=-20, gl<=0
[prog,s1] = sossosvar(prog,monomials(vars,4));  % s1(x) >= 0 for all x
expr1 = -jacobian(V,vars)*f1 + s1*gl;           % -dV/dt + s1*gl
prog = sosineq(prog,expr1);                     % -dV/dt >= -s1*gl

% Zone 2: u\in[-20,20], gl>=0 and gu<=0
[prog,s2] = sossosvar(prog,monomials(vars,4));  % s2(x) >= 0 for all x
[prog,s3] = sossosvar(prog,monomials(vars,4));  % s3(x) >= 0 for all x
expr2 = -jacobian(V,vars)*f2 - s2*gl + s3*gu;   % -dV/dt - s2*gl + s3*gu
prog = sosineq(prog,expr2);                     % -dV/dt >= s2*gl - s3*gu

% Zone 3: u>=20, gu>=0
[prog,s4] = sossosvar(prog,monomials(vars,4));  % s4(x) >= 0 for all x
expr3 = -jacobian(V,vars)*f3 - s4*gu;           % -dV/dt - s4*gu
prog = sosineq(prog,expr3);                     % -dV/dt >= s4*gu

These constraints ensure that e.g. (dV/dx)f1(x) <= 0 for x\in S1 (so when u<=-20), but not necessarily for x\notin S1 (so not necessarily when u > -20). This should ensure that the Lyapunov function V decreases along the appropriate vector field in each of the three zones.

I think for your case this approach should be okay, since each of the three sets S1, S2 and S3 is quite "nice", in the sense that each set is a (convex) polytope. If your input u would be e.g. quadratic in x, this approach might be a lot more conservative. That being said, even in your case, an initial test I ran found the problem to be infeasible, though that might be due to a variety of reasons.

Other than this, I'm not aware of any methodology for implementing a problem such as the one you described. Perhaps, though, there is some literature out there that might give you some more insight.

I hope this helps.

Yngjx commented 1 year ago

This is a timely response!

Actually, this follow-up question about the nonlinear controller is related to the range constraint on state x that I mentioned earlier.

From this definition of a nonlinear controller, it is clear that the value of the control input u is determined by the state x. So in the whole system, this nonlinear controller will give the system a specific region of attraction (ROA).

My idea is to try to use SOSTOOLS to find an SOS Lyapunov function that fits this ROA as well as possible. I made a small modification to the idea you provided, and the code is as follows:

% Declare the input and equations defining different zones
u = -(1.3357*x1 + 0.05*x2);
gl = -u - 20;        % gl>=0 implies u<=-20   (Modification)
gu = u - 20;        % gu>=0 implies u>=20

% Define vector field in three different zones:
f1 = [x2; 19.62*x1 - 2.6667*x2 + 26.6667*-20];  % u<=-20:        gl>=0,gu<=0  (Modification)
f2 = [x2; 19.62*x1 - 2.6667*x2 + 26.6667*u];    % u\in[-20,20]: gl<=0, gu<=0
f3 = [x2; 19.62*x1 - 2.6667*x2 + 26.6667*20];   % u>=20:        gu>=0,gl<=0   (Modification)

% % Enforce dV/dt<=0 in each zone:
% Zone 1: u<=-20, gl>=0,gu<=0                                                               (Modification)
[prog,s1] = sossosvar(prog,monomials(vars,4));  % s1(x) >= 0 for all x
expr1 = -jacobian(V,vars)*f1 + s1*gl;           % -dV/dt + s1*gl
prog = sosineq(prog,expr1);                     % -dV/dt >= -s1*gl

% Zone 2: u\in[-20,20], gl<=0 and gu<=0                                                     (Modification)
[prog,s2] = sossosvar(prog,monomials(vars,4));  % s2(x) >= 0 for all x
[prog,s3] = sossosvar(prog,monomials(vars,4));  % s3(x) >= 0 for all x
expr2 = -jacobian(V,vars)*f2 - s2*gl + s3*gu;   % -dV/dt - s2*gl + s3*gu
prog = sosineq(prog,expr2);                     % -dV/dt >= s2*gl - s3*gu

% Zone 3: u>=20, gu>=0,gl<=0                                                                 (Modification)
[prog,s4] = sossosvar(prog,monomials(vars,4));  % s4(x) >= 0 for all x
expr3 = -jacobian(V,vars)*f3 - s4*gu;           % -dV/dt - s4*gu
prog = sosineq(prog,expr3);                     % -dV/dt >= s4*gu

But really, the above code has to start from the range of x, because x determines the input u.

For example, let x1 belong to [-180,180] and x2 belong to [-360,360], and then determine the value of u. On this basis the state equation of the system can be determined, and finally an SOS Lyapunov function suitable for the system can be found.

I don't know if this is feasible?

djagt commented 1 year ago

I am not sure I fully understand what you intend to do, so please let me know if I'm misinterpreting something.

Given this state-dependent input, you want to establish a ROA for the system. If you can find a LF candidate that (strictly) decreases in a certain region x\in S, then the system is locally asymptotically stable in S, hence S is a ROA.

If this is indeed what you are after, I suppose you could just add the range constraints on x to the constraints imposed in the proposed problem. For example, you can do something like

r = 360;
gROA = r^2 - [x1,x2]*[x1;x2];                       % gROA(x) >=0 implies  x in ball of radius r;
[prog,s1] = sossosvar(prog,monomials(vars,4));      % s1(x) >= 0 for all x
[prog,s2] = sossosvar(prog,monomials(vars,4));      % s2(x) >= 0 for all x
[prog,s3] = sossosvar(prog,monomials(vars,4));      % s3(x) >= 0 for all x

expr1_new = expr1 - s1*gROA;                        % if expr1_new>=0, then expr1 >= s1*gROA
prog = sosineq(prog,expr1_new);                     % expr1 >= s1*gROA, so expr1 >=0 whenever gROA(x)>=0;

expr2_new = expr2 - s2*gROA;                        % if expr2_new>=0, then expr2 >= s2*gROA
prog = sosineq(prog,expr2_new);                     % expr2 >= s2*gROA, so expr2 >=0 whenever gROA(x)>=0;

expr3_new = expr3 - s3*gROA;                        % if expr3_new>=0, then expr3 >= s3*gROA
prog = sosineq(prog,expr3_new);                     % expr3 >= s3*gROA, so expr3 >=0 whenever gROA(x)>=0;

where expr1 etc. are as before. Then, you essentially test local stability in each of the three zones on a ball of radius r, and you can perform bisection on the value of r to find a maximal region of attraction.
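A minimal sketch of such a bisection (the helper `solve_sos_at_radius` is hypothetical; it would assemble the full SOS program above for a given radius r and return whether the solver reports feasibility):

```matlab
% Bisection on the radius r of the ball on which stability is certified.
% solve_sos_at_radius is a hypothetical helper that builds and solves the
% SOS program for a given r, returning true if it is found feasible.
r_lo = 0;   r_hi = 360;            % search interval for the radius
for iter = 1:20                    % interval halves each iteration
    r = (r_lo + r_hi)/2;
    if solve_sos_at_radius(r)
        r_lo = r;                  % feasible: stability certified on radius r
    else
        r_hi = r;                  % infeasible: shrink from above
    end
end
r_max = r_lo;                      % largest certified radius found
```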

SIDEBAR: For a region of attraction, you will of course need to ensure the LF is strictly decreasing on the domain, e.g. enforce -dV/dt >= c|x|^2 + sc(x) for some positive c>0 and SOS function sc. Also, the coercivity and boundedness of V (e.g. b|x|^2 + sb(x) >= V(x) >= a*|x|^2 + sa(x)) would only need to be enforced on the region in which you test stability (the ball of radius r).
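As a rough sketch of how the sidebar constraints could be declared (the constants a, b, c are illustrative assumptions, and sa, sb, sc here act as SOS multipliers on gROA; use the relevant zone vector field fj in place of f where appropriate):

```matlab
% Hedged sketch: strict decrease and coercivity/boundedness of V, enforced
% only on the ball {x : gROA(x) >= 0}. Constants a, b, c are illustrative.
a = 1e-4;  b = 1e4;  c = 1e-4;
[prog,sa] = sossosvar(prog,monomials(vars,1:2));
[prog,sb] = sossosvar(prog,monomials(vars,1:2));
[prog,sc] = sossosvar(prog,monomials(vars,1:2));
normx2 = [x1,x2]*[x1;x2];                                       % |x|^2
prog = sosineq(prog, -jacobian(V,vars)*f - c*normx2 - sc*gROA); % strict decrease on the ball
prog = sosineq(prog, V - a*normx2 - sa*gROA);                   % V >= a|x|^2 on the ball
prog = sosineq(prog, b*normx2 - V - sb*gROA);                   % V <= b|x|^2 on the ball
```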

I don't know if this is what you are referring to. I'm not sure how else to test local stability given such an input u.

Do keep in mind that, in your modifications, you now have gl>=0 implying u<=-20, which is fine of course, but then you'll also need to modify e.g. expr1 to be

expr1 = -jacobian(V,vars)*f1 - s1*gl;

That way, enforcing positivity of expr1 ensures that -dV/dt >= s1*gl and thus dV/dt <= -s1*gl, so that dV/dt(x)<=0 whenever gl(x)>=0 (but not necessarily when gl(x)<=0). You'll have to make similar adjustments to expr2 and expr3.
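For instance, under your sign conventions (gl = -u - 20, gu = u - 20), the adjusted constraints might look like the following (a sketch only, with s2, s3, s4 as declared in your code):

```matlab
% Zone 2 (gl<=0 and gu<=0): if expr2>=0, then -dV/dt >= -s2*gl - s3*gu >= 0
expr2 = -jacobian(V,vars)*f2 + s2*gl + s3*gu;
% Zone 3 (gu>=0): if expr3>=0, then -dV/dt >= s4*gu >= 0
expr3 = -jacobian(V,vars)*f3 - s4*gu;
```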

Please note that this type of problem is new to me as well. I do believe the theory is sound, but of course, I recommend you do your own research as well.

Let me know if something is unclear, or if I misunderstood your question.

Yngjx commented 1 year ago

Yes, as you said, I need to take some time to do my own research and need to think properly about whether my ideas can be combined with the tools you have proposed.

Of course, thank you very much for your patient answer, it has benefited me a lot. I can tell from your reply that you are a serious and responsible researcher.

Once again, thank you and have a nice life!

If I make any progress, I will keep you informed.

djagt commented 1 year ago

No problem at all, happy to help!

Please do let me know if you have any other questions, or encounter any issues. I'd be interested in knowing how you end up tackling this problem.

I will close this issue for now, but feel free to reopen it, or open a new one, if you want to reach out.