Tim-Salzmann / l4casadi

Use PyTorch Models with CasADi for data-driven optimization or learning-based optimal control. Supports Acados.
MIT License

RealTimeL4CasADi Approximation Update #23

Closed: abdelrahman-h-abdalla closed this issue 7 months ago

abdelrahman-h-abdalla commented 8 months ago

Hi Tim,

Many thanks for this library! I am solving a nonlinear TO problem using casadi where a PyTorch MLP is used as one of the constraints. Part of the input to the network consists of the decision variables; the other part comes from a predefined trajectory. I am not satisfied with the solution speed with the L4CasADi model, so I am trying out the Taylor approximations with RealTimeL4CasADi. I am having a hard time understanding from the examples how I can update the approximation every time I solve the problem.

My code structure is as follows:

  1. Construct the RealTimeL4CasADi and casadi functions using get_sym_params:

     ```python
     x_sym = ca.MX.sym("x", in_N)
     f_order2 = l4c.realtime.RealTimeL4CasADi(f, approximation_order=2)
     y_sym = f_order2(x_sym)
     f_casadi_function = ca.Function("f", [x_sym, f_order2.get_sym_params()], [y_sym])
     ```

  2. Construct the optimization problem once at the beginning of the code (I am using the Opti stack for this):

     ```python
     for j in range(N):
         network_input_mx = ca.vertcat(mx_param, mx_decision_var)
         # need to use f_order2.get_params(np.array(?)) first to get f_order2_params
         constraint = f_casadi_function(network_input_mx, f_order2_params)
         constraints.append(constraint)
     for j in range(N):
         self.opti.subject_to(constraints[j] > min_value)
     ```

  3. Each loop of the code:

     ```python
     while True:
         # Set numerical parameters of the new predefined trajectory
         # and initial guesses for the decision variables
         opti.set_value(mx_param, param_value)
         opti.set_initial(mx_decision_var, decision_var_init)
         # Solve the problem
         opti.solve()
     ```

My issue is that it's not clear to me how to update the approximation after the initial construction of the optimization problem in step 2. In order to construct the constraint, an f_order2_params value had to be computed from some numerical input, since f_order2.get_params() expects a numerical numpy array (right?). However, at that point I don't know the trajectory a priori (it changes every loop). For now I set it to a zero numpy array, but how can I update the approximation each loop, when I get a new trajectory, before each opti.solve()? I think it would make sense to update the approximation based on the input trajectory for the known parts of the network input and on the initial guess of the decision variables for the variable part of the network input.

Hope that my question and code are clear.

Best regards, Abdelrahman

Tim-Salzmann commented 7 months ago

Hi Abdelrahman,

As far as I understand, your problem is that you are unsure what to pass as the set-point to get_params before your first solve iteration?

The answer to this is highly dependent on the structure of your optimization. If the input to your L4C model is just the current state, you should have this readily available. If the input includes the decision variables (as it seems to in your case), one natural (non-comprehensive) option is to use your initial guess for the decision variables as the set-point.

Let me know if this helps.

Best Tim

abdelrahman-h-abdalla commented 7 months ago

Hi Tim,

I think my issue is not what to initialize the approximation with, but more how to do it within the structure of the code.

I construct the optimization problem symbolically only once at the beginning of the code, given that I need no numerical values for any parameters (step 2). Later I can repeatedly update any values that are considered inputs/parameters online by setting their value (which changes each loop), without having to rewrite the symbolic problem (step 3).

However, since I need to invoke get_params to compute the params for the approximated function (in the same step mentioned above), I need a numerical set-point a priori. This seems to prevent me from updating the set-point later, given that the problem was already constructed once at the beginning (with some numerical set-point, regardless of its value).

The only way I can see to update the approximation online (repeatedly) with the params computed by get_params would be to reconstruct the whole symbolic optimization problem each time with a new set-point. That just seems inefficient, unfortunately.

Hope the question is clearer now.

Best regards, Abdelrahman

Tim-Salzmann commented 7 months ago

Hi Abdelrahman,

I am still struggling to understand the problem exactly.

I can later just update any values that are considered as inputs/parameters online repeatadly by setting their value (which changes each loop) without having to rewrite the symbolic problem again (step 3).

The approximation parameters are an input/parameter to the function:

f_casadi_function = ca.Function("f", [x_sym, self.f_order2.get_sym_params()], [y_sym])

However, with the need to envoke get_params to compute the params for the approximated function (at the same step mentioned above), I need to have a numerical set-point apriori.

get_params will compute the numerical approximation parameters from a numerical setpoint. These numerical approximations can then be passed as parameters to the symbolic graph without re-creating it.

Thus the process would be roughly:

1. Create the graph symbolically with get_sym_params as parameters (as you do).
2. While running:
   1. Solve the problem with the (initial) approximation parameters passed as parameters to the symbolic graph.
   2. Extract the numerical set-point from the last solution.
   3. Call `get_params` with the new set-point as argument to get new approximation parameters.
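To make the loop concrete, here is a minimal plain-NumPy stand-in (all names are illustrative, not the l4casadi API): `get_params` mimics computing numerical Taylor data at a set-point, and `f_approx` mimics the already-built symbolic graph that merely consumes those parameters, so nothing is rebuilt between iterations.

```python
import numpy as np

# Toy stand-in for the PyTorch model wrapped by RealTimeL4CasADi.
def f(x):
    return np.sin(x).sum()

def get_params(x0, eps=1e-5):
    """Mimics RealTimeL4CasADi.get_params: numerical Taylor data at set-point x0
    (first-order here for brevity; the library also supports second order)."""
    n = x0.size
    grad = np.array([(f(x0 + eps * np.eye(n)[i]) - f(x0 - eps * np.eye(n)[i])) / (2 * eps)
                     for i in range(n)])
    return {"x0": x0.copy(), "f0": f(x0), "grad": grad}

def f_approx(x, p):
    """Stand-in for the symbolic graph: evaluates the Taylor model given the
    approximation parameters p, without rebuilding anything."""
    return p["f0"] + p["grad"] @ (x - p["x0"])

# 1) Build once; 2) loop: solve, extract set-point, refresh parameters.
set_point = np.zeros(3)            # initial set-point (e.g. initial guess)
params = get_params(set_point)     # initial approximation parameters
for _ in range(3):
    x_sol = set_point + 0.1        # placeholder for the opti.solve() result
    params = get_params(x_sol)     # 2.3) recompute params at the new set-point
    set_point = x_sol
```

The key point the sketch illustrates: only the numerical content of `params` changes each iteration; the evaluation function itself stays fixed.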

If this is still not what you meant maybe you could provide a minimum (not working) example of what you are trying to do?

Best Tim

abdelrahman-h-abdalla commented 7 months ago

Hi Tim

These numerical approximations can then be passed as parameters to the symbolic graph without re-creating it.

Yes, I missed that! It's as simple as creating an Opti parameter (sized to match the number of Taylor parameters) and just updating it each time with opti.set_value(f_order2_params, f_order2.get_params(np.zeros(...)))

It works now!

Thanks a lot, Abdelrahman

Tim-Salzmann commented 7 months ago

Awesome, feel free to close the issue if it is solved!

Best Tim