jbcaillau opened this issue 3 months ago
@ocots hadn't you written a global list of todo's for 1.0.0? I can't find anything more than scattered issues (which is fine, but I would like some prioritisation)
@jbcaillau I can't find any list. Maybe we can make one. I'm watching Léon Marchand at 11:40 and then I will start a list.
Things we can do before the new release v1.0.0.

See some info here about OptimalControl.jl.

**Reference.**

**Documentation.**

- In the OptimalControl.jl documentation, we should add a page about Solvers and briefly describe the `solve` function.
- In CTDirect.jl we should have a more detailed presentation of the discretization scheme, with the ordering of the variables...
- MINPACK and NonlinearSolve in the tutorials about Indirect simple shooting and Goddard.
- We could add a test to avoid a crash when using an ARM system, since MINPACK is not supported there: see here, or a simple `try ... catch`.

**Performance.**

- JuMP, using work from https://github.com/0Yassine0/COTS.jl

**Enhancement.**

**Unit tests.**

**Bug.**
Many thanks @ocots. I've added 🚀 on the must-be-done-before-1.0.0 items, according to me. @PierreMartinon @gergaud feel free to update!
ps. hey, https://github.com/control-toolbox/OptimalControl.jl/issues/194 was the list I had in mind 👍🏽
For what I will do, I have put a ⚽️.
I have added MINPACK in the indirect simple shooting tutorial and the Goddard problem. I still have to check that there is no error: I have added a `try...catch` to switch from `hybrj` to `hybrd` when an error occurs, since on my computer I cannot use `hybrj` but `hybrd` works.

We should also add a tutorial about how to improve performance: installing HSL, using the best linear solver, etc.

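For illustration, here is a minimal sketch of that fallback pattern. The two solver arguments are hypothetical wrappers around the MINPACK routines, not functions from OptimalControl.jl or MINPACK.jl.

```julia
# Minimal sketch of the try...catch fallback described above: try the
# Jacobian-based routine (hybrj) first, and fall back to the Jacobian-free
# one (hybrd) if it throws. `solve_hybrj` and `solve_hybrd` are hypothetical
# wrappers, only meant to illustrate the control flow.
function robust_shooting(solve_hybrj, solve_hybrd, p0)
    try
        return solve_hybrj(p0)
    catch err
        @warn "hybrj failed, falling back to hybrd" exception = err
        return solve_hybrd(p0)
    end
end
```
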
> I have added MINPACK in the indirect simple shooting tutorial and the Goddard problem. I still have to check that there is no error: I have added a `try...catch` to switch from `hybrj` to `hybrd` when an error occurs, since on my computer I cannot use `hybrj` but `hybrd` works.

@ocots nice. if not already done, I guess we can remove the dependency towards NonlinearSolve 💩
No, there will be both.

> No, there will be both.

where do we (still) need NonlinearSolve 💩?

> No, there will be both.
>
> where do we (still) need NonlinearSolve 💩?

Tutorials:

OK: thought you had gone back to MINPACK for these.
I think we should add:

- a tutorial about how to optimize the code, cf. https://github.com/0Yassine0/COTS.jl
- a tutorial (in the manual part) about the solvers, cf. Document algorithms #253
- a tutorial (in the manual part) about the use of the `Flow` function.

Actually, the first two points are indicated in the list above. I should add something about the `Flow` function.

> I think we should add:
>
> - a tutorial about how to optimize the code, cf. https://github.com/0Yassine0/COTS.jl

what do you mean? user stuff like changing the linear solver (HSL...), using MKL...? we are far from optimal right now, but that's a dev (not user) issue, including (but not limited to):

- building a better (AD)NLP model: Pass sparse structure of constraints to NLPModels CTDirect.jl#183 (comment)
- use in-place functions when building the NLP model: CTDirect.jl#188

> - a tutorial (in the manual part) about the solvers, cf. Document algorithms #253

the user cannot do much about ordering

> - a tutorial (in the manual part) about the use of the `Flow` function.

right!

To me only two things left to go 1.0.0:

- [ ] @PierreMartinon 🚀 Adjoint (dynamics multipliers) parsing CTDirect.jl#184
- [x] @ocots 🚀 Flow with variables (NonFixed ocp) CTFlows.jl#15

Gents... 🙂

> I think we should add:
>
> - a tutorial about how to optimize the code, cf. https://github.com/0Yassine0/COTS.jl
>
> what do you mean? user stuff like changing the linear solver (HSL...), using MKL...?

Yes, we could do something like: https://docs.sciml.ai/NonlinearSolve/stable/tutorials/code_optimization/

Just simple things like you said: changing the linear solver, using MKL, saying that it is possible to change the AD backend, without any comparison. It is maybe more tuning than optimising that we should present. But I think we can profit from what @0Yassine0 did to explain some possibilities; it shows the versatility of our package. (A rough sketch of such content is given after this message.)

> we are far from optimal right now, but that's a dev (not user) issue, including (but not limited to):
>
> - building a better (AD)NLP model: Pass sparse structure of constraints to NLPModels CTDirect.jl#183 (comment)
> - use in-place functions when building the NLP model: CTDirect.jl#188

I was not thinking of this.

> - a tutorial (in the manual part) about the solvers, cf. Document algorithms #253
>
> the user cannot do much about ordering

I was simply thinking about the fact that the user can choose the method to solve their problem. For the moment, there is only one case: see
But later there will be more. I thought about a simple tutorial explaining how to choose the method, with its description. We could take the opportunity to redirect to the documentation of the packages we use, like ADNLPModels.jl, NLPModelsIpopt.jl and CTDirect.jl.

> - a tutorial (in the manual part) about the use of the `Flow` function.
>
> right!

Actually, the first two points are indicated in the list above. I should add something about the `Flow` function.

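As announced above, a rough sketch of what such a tuning tutorial could show. It is only an illustration: the keyword arguments passed to `solve` below are assumptions, not the documented OptimalControl.jl API.

```julia
# Hedged sketch of possible "tuning" content; the keyword names below are
# assumptions for illustration, not the documented OptimalControl.jl API.
using OptimalControl
# using MKL          # swap OpenBLAS for MKL (dense linear algebra)
# import HSL_jll     # make the HSL linear solvers (ma27, ma57, ...) available to Ipopt

# ocp = @def ...     # problem definition omitted

# sol = solve(ocp)                             # defaults
# sol = solve(ocp; linear_solver = "ma57")     # assumed: forwarded to the NLP solver
# sol = solve(ocp; grid_size = 500)            # assumed: finer discretization grid
```
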
@ocots @PierreMartinon we should immediately add MadNLP: https://github.com/control-toolbox/OptimalControl.jl/issues/259 Works out of the box (check https://github.com/control-toolbox/OptimalControl.jl/pull/260) and @frapac can help in case of need 🙂
Regarding MadNLP, I have one question: I don't know how to have the CTSolveExt extension loaded if either NLPModelsIpopt or MadNLP is used. From the Julia docs, a package extension will only load if all of its (weak) dependencies are present.

We could try to have one extension per solver, but this is not how we currently work: we have a single function with a Symbol argument for the solver, not several methods with multiple dispatch...
For the sparsity pattern, the current trapeze method makes it a bit more involved: we typically store the dynamics at the end of a time step before moving on to the next step, so that we don't compute it twice. This complicates the decomposition of the constraints as a simple loop over the time steps, and the resulting sparsity pattern.
```julia
args_i = ArgsAtTimeStep(xu, docp, 0, v)
args_ip1 = ArgsAtTimeStep(xu, docp, 1, v)
index = 1 # counter for the constraints
for i in 0:docp.dim_NLP_steps-1
    # state equation
    index = setStateEquation!(docp, c, index, (args_i, args_ip1))
    # path constraints
    index = setPathConstraints!(docp, c, index, args_i, v)
    # smart update for next iteration
    if i < docp.dim_NLP_steps-1
        args_i = args_ip1
        args_ip1 = ArgsAtTimeStep(xu, docp, i+2, v)
    end
end
```
Ideally we would only have `args_i` and no update step. Maybe we should implement the implicit midpoint method and work from there.

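To make the point concrete, here is a small self-contained toy example (plain Julia, not CTDirect code) of the implicit midpoint defects: each interval uses a single dynamics evaluation at its own midpoint, so nothing has to be carried over from one step to the next, unlike the trapeze loop above.

```julia
# Toy sketch of implicit midpoint defects (not CTDirect code): one dynamics
# evaluation per interval, no value reused by the next interval.
f(x, u) = [x[2], u[1]]                     # toy double-integrator dynamics

function midpoint_defects(X, U, h)         # X: n x (N+1) states, U: m x N controls
    N = size(U, 2)
    c = similar(X, size(X, 1), N)
    for i in 1:N
        xm = (X[:, i] + X[:, i+1]) / 2     # midpoint state of interval i
        c[:, i] = X[:, i+1] - X[:, i] - h * f(xm, U[:, i])
    end
    return c
end

midpoint_defects(zeros(2, 5), ones(1, 4), 0.25)
```
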
@PierreMartinon Is that clear?
> Regarding MadNLP, I have one question: I don't know how to have the CTSolveExt extension loaded if either NLPModelsIpopt or MadNLP is used. From the Julia docs, a package extension will only load if all of its (weak) dependencies are present. We could try to have one extension per solver, but this is not how we currently work: we have a single function with a Symbol argument for the solver, not several methods with multiple dispatch...

The first `solve` function is in the sources, while the ones on the concrete types are in two separate extensions.

Well, I got the call with `IpoptSolver` wrong, but you get the idea.
> We could try to have one extension per solver [...]

@PierreMartinon that is what I would favor, again to avoid load-time issues (= just load the relevant extension when triggered by the chosen solver).

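A hedged sketch of what "one extension per solver" could look like; all names below are hypothetical, not the actual CTDirect/OptimalControl layout.

```julia
# Project.toml of the parent package (weak deps trigger the extensions):
#   [weakdeps]
#   NLPModelsIpopt = "<uuid>"
#   MadNLP         = "<uuid>"
#   [extensions]
#   CTSolveExtIpopt  = "NLPModelsIpopt"
#   CTSolveExtMadNLP = "MadNLP"

# src/: the parent package maps the user-facing Symbol to a tag and defines a
# stub; each extension adds the method for its own tag when its solver is loaded.
module SolverDispatchSketch

struct IpoptTag end
struct MadNLPTag end

solver_tag(s::Symbol) =
    s === :ipopt  ? IpoptTag()  :
    s === :madnlp ? MadNLPTag() :
    error("unknown NLP solver: $s")

# generic entry point; solver-specific methods live in the extensions
solve_docp(tag, docp; kwargs...) =
    error("no method for $(typeof(tag)): load NLPModelsIpopt or MadNLP first")

end # module

# ext/CTSolveExtIpopt.jl would then contain something like:
#   module CTSolveExtIpopt
#   using SolverDispatchSketch, NLPModelsIpopt
#   SolverDispatchSketch.solve_docp(::SolverDispatchSketch.IpoptTag, docp; kwargs...) =
#       ipopt(docp.nlp; kwargs...)
#   end
```

This way only the extension whose trigger package the user actually loads gets compiled, which addresses the load-time concern.
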
> For the sparsity pattern, the current trapeze method makes it a bit more involved: we typically store the dynamics at the end of a time step before moving on to the next step, so that we don't compute it twice. This complicates the decomposition of the constraints as a simple loop over the time steps, and the resulting sparsity pattern.

@PierreMartinon oh, right. spares half the computation. nice.
@PierreMartinon of course the `solve` corresponds to your `solve_docp`.
Yep. I'm stuck on these damn descriptions: I would like to add and test the `:madnlp` case instead of `:ipopt`, but I can't find the right syntax -_- I tried to complete `available_methods` and to pass `:madnlp` to `solve`, but no luck. Is it a list of pairs, or just a plain list?
```julia
algorithms = ()
algorithms = add(algorithms, (:adnlp, :ipopt))
algorithms = add(algorithms, (:adnlp, :madnlp))

available_methods()
```

```
MethodError: Cannot `convert` an object of type
  Tuple{Tuple{Symbol, Symbol}, Tuple{Symbol, Symbol}} to an object of type
  Tuple{Tuple{Vararg{Symbol}}}
```
Not sure either that I understand how `getFullDescription` works with its priorities.

In the end, wouldn't we be better off going back to good old kwargs with default values?

Edit: well, I just removed the type annotation from `available_methods()` and it seems to work.
First madnlp tests https://github.com/control-toolbox/CTDirect.jl/pull/191
> To me only two things left to go 1.0.0:
>
> - [x] @PierreMartinon 🚀 Adjoint (dynamics multipliers) parsing CTDirect.jl#184
> - [x] @ocots 🚀 Flow with variables (NonFixed ocp) CTFlows.jl#15
>
> Gents... 🙂

@PierreMartinon one step away from 1.0.0 🤞🏽
> Edit: well, I just removed the type annotation from `available_methods()` and it seems to work.

Sorry, the right type was: `Tuple{Vararg{Tuple{Vararg{Symbol}}}}`.

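A minimal standalone reproduction of the `convert` error and of the fix (plain Julia, not the CTBase code):

```julia
# Minimal reproduction of the MethodError above (plain Julia, not CTBase code).
algorithms = ((:adnlp, :ipopt), (:adnlp, :madnlp))  # Tuple{Tuple{Symbol, Symbol}, Tuple{Symbol, Symbol}}

# exactly one inner tuple expected -> calling this throws the convert MethodError
bad()::Tuple{Tuple{Vararg{Symbol}}} = algorithms

# any number of inner Symbol tuples -> works
ok()::Tuple{Vararg{Tuple{Vararg{Symbol}}}} = algorithms

ok()      # ((:adnlp, :ipopt), (:adnlp, :madnlp))
# bad()   # MethodError: Cannot `convert` an object of type ...
```
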
@ocots @PierreMartinon thanks for the active cleaning / upgrading (`solve` and everything) 👍🏽. was beach / boat day on my side 🌞
The parsing for the constraints multipliers should be up in the next release. Untested though.

Before 1.0 I'd like to:

- add a second discretization scheme (implicit midpoint). Adding other schemes later should be easier.
- then try to pass the sparsity structure and see what performance we get.

> The parsing for the constraints multipliers should be up in the next release. Untested though.
>
> Before 1.0 I'd like to:
>
> - add a second discretization scheme (implicit midpoint). Adding other schemes later should be easier.

Not a priority, I think. I am not expecting big changes from midpoint (dual to the trapezoidal scheme, with more or less the same properties). Having a higher order would be nice (see https://github.com/control-toolbox/CTDirect.jl/issues/101#issuecomment-2278117351) but not top priority for most use cases.

> - then try to pass the sparsity structure and see what performance we get.

More important, yes: see https://github.com/control-toolbox/CTDirect.jl/issues/183#issuecomment-2272012662 (the ADNLPModels way, instead of a fully manual process through NLPModels, if I got it correctly). But even more important (see this other exchange with @amontoison, https://github.com/control-toolbox/CTDirect.jl/issues/188#issuecomment-2269933107): check https://github.com/control-toolbox/CTBase.jl/issues/232

I know, but I can't really pass the sparse structure with the trapeze method (more blocks than a true single step method due to each dynamics acting on two steps and not just one, and also the saving of this dynamics for the next step complicates things).
@PierreMartinon ah ok. but I think you can. Independently of how (efficiently) you compute the constraint, its value is something like

$$c_i = x_{i+1} - x_i - \frac{t_{i+1} - t_i}{2}\,\bigl(f(x_i, u_i) + f(x_{i+1}, u_{i+1})\bigr),$$

for which the derivative wrt. the $x_i$'s and $u_i$'s is clear (and structured). Agree?

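Spelling the claim out, with $h_i = t_{i+1} - t_i$ and the trapeze defect $c_i$ as above:

$$
\frac{\partial c_i}{\partial x_i} = -I - \frac{h_i}{2}\, f_x(x_i, u_i), \qquad
\frac{\partial c_i}{\partial x_{i+1}} = I - \frac{h_i}{2}\, f_x(x_{i+1}, u_{i+1}),
$$

$$
\frac{\partial c_i}{\partial u_i} = -\frac{h_i}{2}\, f_u(x_i, u_i), \qquad
\frac{\partial c_i}{\partial u_{i+1}} = -\frac{h_i}{2}\, f_u(x_{i+1}, u_{i+1}),
$$

so each defect only involves the variables of steps $i$ and $i+1$: the Jacobian has a banded block structure, whatever caching is used to evaluate it.
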
@PierreMartinon available for a quick call to decide on publishing 1.0?
> @PierreMartinon available for a quick call to decide on publishing 1.0?

Eh, more likely next week. Do we have a deadline for 1.0, a conf or something?
> @PierreMartinon available for a quick call to decide on publishing 1.0?
>
> Eh, more likely next week. Do we have a deadline for 1.0, a conf or something?

@PierreMartinon no deadline, just not bad to proceed according to what was planned above. next Monday? 15:00?
> @PierreMartinon available for a quick call to decide on publishing 1.0?
>
> Eh, more likely next week. Do we have a deadline for 1.0, a conf or something?
>
> @PierreMartinon no deadline, just not bad to proceed according to what was planned above. next Monday? 15:00?

@jbcaillau Bump
@PierreMartinon meeting scheduled tonight 21:00 https://univ-cotedazur.zoom.us/my/caillau
To discuss:
> @PierreMartinon meeting scheduled tonight 21:00 https://univ-cotedazur.zoom.us/my/caillau
>
> To discuss:

@ocots @gergaud please join if you're around 🤞🏾
Breakage seems to work: https://github.com/control-toolbox/CTDirect.jl/pull/209#issuecomment-2298473309
Constraints / multipliers parsing seems to work: https://github.com/control-toolbox/CTDirect.jl/pull/210
See https://github.com/control-toolbox/CTDirect.jl/issues/115#issuecomment-2252405792