infiniteopt / InfiniteOpt.jl

An intuitive modeling interface for infinite-dimensional optimization problems.
https://infiniteopt.github.io/InfiniteOpt.jl/stable
MIT License

InfiniteOpt doesn't work when deriv(x[i], t) .== -gradf(x[i]) in R^d (d >= 2), where the right-hand side is a vector returned by the user-defined function gradf #342

Closed askuyue closed 2 months ago

askuyue commented 3 months ago

Thanks very much for your great work on InfiniteOpt.jl.

Describe the bug: @constraint does not work when the variables x[i] are in R^d (d >= 2). Here are the errors:

Full code for ERROR 1:

# Defining my function
function gradf(x::Vector)
    dimx = length(x);
    gradf = zeros(dimx);
    for i=1:dimx
        gradf[i] = x[i] + sin(x[i]);
    end
    return gradf;
end
### Packages
# Import packages
using JuMP, InfiniteOpt, LinearAlgebra, AmplNLWriter
# Import open-source solvers
using Ipopt
## Solver.Optimizer
wjm = InfiniteModel(Ipopt.Optimizer)
## The time domain t in [0,T] 
@infinite_parameter(wjm, t in [0, 10], num_supports = 101)
## Now let's specify the decision variables:
@variables(wjm, begin
    ## unknown variables
    x[1:3], Infinite(t) # x[i](t) in R^d
end)
## Set the initial conditions of variable x at time t = 0 :
@constraint(wjm, X0[i=1:3], x[i](0) .== rand(3))
# ODEs
@constraint(wjm, ODExt[i=1:3], deriv(x[i], t) .== -gradf(x[i]) + sum( 10/sqrt(1+sum((x[j]-x[i]).^2)).*(x[j]-x[i]) for j in 1:3 ));
 # solving model
optimize!(wjm)

Full code for ERROR 2:

# Defining my function
function gradf(x)
    dimx = length(x);
    gradf = zeros(dimx);
    for i=1:dimx
        gradf[i] = x[i] + sin(x[i]);
    end
    return gradf;
end
### Packages
# Import packages
using JuMP, InfiniteOpt, LinearAlgebra, AmplNLWriter
# Import open-source solvers
using Ipopt
## Solver.Optimizer
wjm = InfiniteModel(Ipopt.Optimizer)
## The time domain t in [0,T] 
@infinite_parameter(wjm, t in [0, 10], num_supports = 101)
## Now let's specify the decision variables:
@variables(wjm, begin
    ## unknown variables
    x[1:3], Infinite(t) # x[i](t) in R^d
end)
## Set the initial conditions of variable x at time t = 0 :
@constraint(wjm, X0[i=1:3], x[i](0) .== rand(3))
# ODEs
@constraint(wjm, ODExt[i=1:3], deriv(x[i], t) .== -gradf(x[i]) + sum( 10/sqrt(1+sum((x[j]-x[i]).^2)).*(x[j]-x[i]) for j in 1:3 ));
 # solving model
optimize!(wjm)


Thanks for replying.

askuyue commented 3 months ago

And if I delete -gradf(x[i]) from ODExt[i=1:3], there are still errors:

Exception of type: TOO_FEW_DOF in file "Interfaces/IpIpoptApplication.cpp" at line 655:
 Exception message: status != TOO_FEW_DEGREES_OF_FREEDOM evaluated false: Too few degrees of freedom (rethrown)!

EXIT: Problem has too few degrees of freedom.

for the following code:

### Packages
# Import packages
using JuMP, InfiniteOpt, LinearAlgebra, AmplNLWriter
# Import open-source solvers
using Ipopt
## Solver.Optimizer
wjm = InfiniteModel(Ipopt.Optimizer)
## The time domain t in [0,T] 
@infinite_parameter(wjm, t in [0, 10], num_supports = 101)
## Now let's specify the decision variables:
@variables(wjm, begin
    ## unknown variables
    x[1:3], Infinite(t) # x[i](t) in R^d
end)
## Set the initial conditions of variable x at time t = 0 :
@constraint(wjm, X0[i=1:3], x[i](0) .== rand(3))
# ODEs
@constraint(wjm, ODExt[i=1:3], deriv(x[i], t) .== sum( 10/sqrt(1+sum((x[j]-x[i]).^2)).*(x[j]-x[i]) for j in 1:3 ));
 # solving model
optimize!(wjm)

I really don't know which part of the code is wrong, so I need your help. Thanks very much!

Sincerely, Z.-C. Hong.

pulsipher commented 3 months ago

Hi @askuyue

Welcome! This behavior is expected since your code has a few mistakes. For future reference, it is preferred that usage questions be discussed in the Discussions forum: https://github.com/infiniteopt/InfiniteOpt.jl/discussions since this does not denote a bug with InfiniteOpt itself.

Regarding the gradf function:

You call gradf(x[i]) where x[i] is a single variable (not a vector), which leads to all the errors you reference. Moreover, it is not clear what you are trying to accomplish with that function, since it appears you intend for it to return a vector inside a scalar equation, which is ill-posed.

In general, it seems you are mixing up scalarized and vectorized operators and expressions. To become better acquainted with the syntax, I highly recommend reviewing JuMP's introductory tutorials. InfiniteOpt uses exactly the same modelling interface as JuMP, just with the addition of specialized infinite variables.

I can provide more specific direction on how to fix this if you can provide the mathematical version of the model you are trying to solve.

Regarding the Problem has too few degrees of freedom error from Ipopt:

The problem here is the boundary condition. Again, the problem stems from mixing up vectors and scalars: x[i](0) produces a scalar variable, while rand(3) produces a 3-element vector. Because you use the vectorized .== operator, three constraints are created for each x[i](0), and they contradict each other since each fixes x[i](0) to a different random number. This results in an infeasible problem. We can fix it by instead having:

@constraint(wjm, X0[i=1:3], x[i](0) == rand())

Alternatively, we can use vectorized notation:

@constraint(wjm, X0, restrict.(x, 0) .== rand(3))

To avoid making mistakes, I strongly recommend working with a scalarized model.
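Putting this advice together, a fully scalarized sketch of the model might look like the following. Note that substituting the elementwise expression x[i] + sin(x[i]) for the vector-returning gradf is an assumption about the intended dynamics, not a confirmed fix:

```julia
using InfiniteOpt, Ipopt

wjm = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(wjm, t in [0, 10], num_supports = 101)
@variable(wjm, x[1:3], Infinite(t))

# Draw the random initial condition once, so each x[i](0) is fixed to a
# single value (scalar ==, not vectorized .==).
x0 = rand(3)
@constraint(wjm, X0[i = 1:3], x[i](0) == x0[i])

# One scalar ODE per component; -(x[i] + sin(x[i])) stands in for the
# vector-returning gradf(x[i]) (assumed to be the intended elementwise term),
# and the coupling term is written with scalar operators throughout.
@constraint(wjm, ODExt[i = 1:3],
    deriv(x[i], t) == -(x[i] + sin(x[i])) +
        sum(10 / sqrt(1 + (x[j] - x[i])^2) * (x[j] - x[i]) for j in 1:3))

optimize!(wjm)
```

With every constraint scalar, there is no broadcasting to silently create contradictory constraints, and the degrees-of-freedom count matches the number of ODEs.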

Where to go next

It would help to know what problem you are actually trying to solve. To continue that discussion, I recommend posting in the Discussions forum.

askuyue commented 3 months ago

Sorry, these are my faults; they are not bugs.

Thanks very much for your kindness.

pulsipher commented 2 months ago

If there are no further issues, I will close this issue. Please feel welcome to post in the discussions forum with any additional questions.