infiniteopt / InfiniteOpt.jl

An intuitive modeling interface for infinite-dimensional optimization problems.
https://infiniteopt.github.io/InfiniteOpt.jl/stable
MIT License

Failure with polynomial dynamics in OC problem #339

Closed by baggepinnen 4 months ago

baggepinnen commented 4 months ago

The following problem solves well, but changing the dynamics constraint @constraint(m, ∂(v, t) == u) to @constraint(m, ∂(v, t) == u^3) makes the optimizer do nothing.

using InfiniteOpt, Ipopt, Plots
m = InfiniteModel(optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 5));
@infinite_parameter(m, t in [0, 8], num_supports = 101)
@variables(m, begin
    # state variables
    x, Infinite(t)
    v, Infinite(t)
    u, Infinite(t)
end)
@objective(m, Min, ∫(abs2(x) + abs2(v) + abs2(u), t))
@constraint(m, x(0) == 1)
@constraint(m, v(0) == 0)
@constraint(m, ∂(x, t) == v)
@constraint(m, ∂(v, t) == u)
InfiniteOpt.optimize!(m)

t_opt = value.(t);
x_opt = value.(x);
v_opt = value.(v);
u_opt = value.(u)
plot(t_opt, [x_opt v_opt u_opt], label=["x" "v" "u"], c=[:blue :orange :green])

With dv = u:

(screenshot: plot of the x, v, and u trajectories)

With dv = u^3:

(screenshot: plot of the x, v, and u trajectories)

Environment:

julia> VERSION
v"1.10.2"

  [20393b10] InfiniteOpt v0.5.8
  [b6b21f68] Ipopt v1.6.2

Side question: why does u always start at 0? There is no constraint besides ∂(v, t) == u, and that shouldn't force this, should it?

pulsipher commented 4 months ago

Hi @baggepinnen,

Ipopt does converge to the optimal solution in both cases; the u^3 case just happens to return a simple answer. To confirm this and make the underlying discretization scheme clearer, here is the same model coded up in JuMP using implicit (backward) Euler:

using JuMP, Ipopt, Plots
m = Model(optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 5));
@variables(m, begin
    # state variables
    x[1:101]
    v[1:101]
    # control variable
    u[2:101] # first index is unneeded
end)
@objective(m, Min, sum(abs2(x[i]) + abs2(v[i]) + abs2(u[i]) for i in 2:101))
@constraint(m, x[1] == 1)
@constraint(m, v[1] == 0)
@constraint(m, [i = 1:100], x[i+1] == 8/100 * v[i+1] + x[i])
@constraint(m, [i = 1:100], v[i+1] == 8/100 * u[i+1]^3 + v[i])
optimize!(m)

plot(0:8/100:8, value.(x))

This yields the same answer that InfiniteOpt gives. What answer do you expect to get?

With regard to why u(0) = 0 in problems like this one: u is a control variable whose initial value has no impact on the problem, because of the state initial conditions, when an implicit derivative approximation scheme is used. Hence, the optimizer is free to give it whatever value it likes since it is unconstrained. In this case, the solver chooses 0 since that helps to minimize the objective.
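The point above can be sketched in plain Julia, without any solver: under backward Euler with N supports, the dynamic constraints are v[i+1] == h*u[i+1]^3 + v[i] for i = 1:N-1, so the control only ever appears at indices 2 through N and the first index is free. (The index bookkeeping below is illustrative, not InfiniteOpt API.)

```julia
# Indices of u that appear in the backward-Euler dynamic constraints
#   v[i+1] == h * u[i+1]^3 + v[i]   for i = 1:N-1
N = 101
u_indices_in_constraints = [i + 1 for i in 1:N-1]

println(1 in u_indices_in_constraints)      # false: u[1] never appears
println(extrema(u_indices_in_constraints))  # (2, 101): only u[2], …, u[N] are constrained
```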

baggepinnen commented 4 months ago

Hmm, this seems strange. If I put zero penalty on u and v, I still get the same solution; I can't see how this can be a local optimum?

using InfiniteOpt, Ipopt, Plots
m = InfiniteModel(optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 5));
@infinite_parameter(m, t in [0, 8], num_supports = 101)
@variables(m, begin
    # state variables
    x, Infinite(t)
    v, Infinite(t)
    u, Infinite(t)
end)
@objective(m, Min, ∫(abs2(x) + 0*abs2(v) + 0*abs2(u), t))
@constraint(m, x(0) == 1)
@constraint(m, v(0) == 0)
@constraint(m, ∂(x, t) == v)
@constraint(m, ∂(v, t) == u^3)
InfiniteOpt.optimize!(m)

t_opt = value.(t);
x_opt = value.(x);
v_opt = value.(v);
u_opt = value.(u)
plot(t_opt, [x_opt v_opt u_opt], label=["x" "v" "u"], c=[:blue :orange :green])

pulsipher commented 4 months ago

A key thing to keep in mind is that nonconvex NLPs can be very sensitive to the initial guess when solved locally. In this case, changing the initial guess of u gives a different answer:

using InfiniteOpt, Ipopt, Plots
m = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(m, t in [0, 8], num_supports = 101)
@variables(m, begin
    # state variables
    x, Infinite(t)
    v, Infinite(t)
    # control variable
    u, Infinite(t), (start = -1)
end)
@objective(m, Min, ∫(abs2(x) + abs2(v) + abs2(u), t))
@constraint(m, x(0) == 1)
@constraint(m, v(0) == 0)
@constraint(m, ∂(x, t) == v)
@constraint(m, ∂(v, t) == u^3)
optimize!(m)

t_opt = value(t)
x_opt = value(x)
v_opt = value(v)
u_opt = value(u)
plot(t_opt, [x_opt v_opt u_opt], label=["x" "v" "u"], c=[:blue :orange :green])

(screenshot: plot of the x, v, and u trajectories)

Changing the initial guess to -10 gives yet another local solution: (screenshot: plot of the x, v, and u trajectories)
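For quick experiments, the guess on an existing variable can also be updated without rebuilding the model. This is a sketch assuming the `set_start_value_function` API from InfiniteOpt's variable interface, which accepts a function of the infinite parameter:

```julia
# Assumed API: set_start_value_function lets the guess vary with t;
# here a constant function gives u an initial guess of -10 everywhere.
set_start_value_function(u, t -> -10.0)
optimize!(m)  # re-solve from the new starting point
```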

This all highlights just how important good initial guess values are for NLPs. You can also try using a global solver like Alpine, SCIP, or BARON, but these may take a while to converge.
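As a sketch of the global-solver route, assuming SCIP.jl is installed: SCIP can solve nonconvex polynomial problems to global optimality, but it scales poorly with problem size, so fewer supports are used here than in the Ipopt runs.

```julia
using InfiniteOpt, SCIP

# Same model as above, but with a global solver and a coarser discretization
# (11 supports) to keep the spatial branch-and-bound tractable.
m = InfiniteModel(SCIP.Optimizer)
@infinite_parameter(m, t in [0, 8], num_supports = 11)
@variables(m, begin
    x, Infinite(t)
    v, Infinite(t)
    u, Infinite(t)
end)
@objective(m, Min, ∫(abs2(x) + abs2(v) + abs2(u), t))
@constraint(m, x(0) == 1)
@constraint(m, v(0) == 0)
@constraint(m, ∂(x, t) == v)
@constraint(m, ∂(v, t) == u^3)
optimize!(m)
```

If exact convergence takes too long, SCIP's "limits/gap" parameter (assumed here from SCIP's parameter list) can be set via set_optimizer_attribute to accept a solution within a relative optimality gap.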

baggepinnen commented 4 months ago

Interesting: it looks like the optimal solution has impulsive control action, and the solvers have a hard time finding it. Here is a solution with 101 support points obtained by SCIP with a fairly large optimality gap; it looks very similar in nature to an optimal solution with a tiny optimality gap obtained for 11 support points.

(screenshot: plot of the SCIP solution trajectories)

Thanks for your help and sorry for my confused issue!

pulsipher commented 4 months ago

No need to apologize, I am glad we were able to clear things up.