MadNLP / DynamicNLPModels.jl

NLPModels for dynamic optimization
MIT License

Added `get_u` and `get_s` functions #24

Closed: dlcole3 closed this 2 years ago

dlcole3 commented 2 years ago

Added functions to query the solution. The functions take the solver reference and the model as inputs. They return the $s$ and $u$ solution values in all sparse cases, but only the $u$ values in the condensed case with no K matrix. Additional functionality for querying the $s$ and $u$ variables in the condensed case will be addressed later.
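Since the dynamics make the states a function of the initial state and the inputs, $s$ can in principle be recovered from $u$ even when only $u$ is returned. A minimal pure-Julia sketch of that rollout (the matrices, dimensions, and names here are illustrative, not the package API):

```julia
# Illustrative LQ dynamics s_{t+1} = A s_t + B u_t (not the package's data)
A  = [0.9 0.1; 0.0 0.8]
B  = reshape([0.0, 1.0], 2, 1)
s0 = [1.0, 0.0]
N  = 3                          # horizon length
u  = [[0.5], [-0.2], [0.1]]     # one input vector per time step

# Roll the dynamics forward to recover every state from s0 and u
s = Vector{Vector{Float64}}(undef, N + 1)
s[1] = s0
for t in 1:N
    s[t + 1] = A * s[t] + B * u[t]
end
```

This is essentially why the condensed (dense) formulation can drop $s$ from the variable vector: the states are fully determined by $s_0$ and $u$.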

codecov-commenter commented 2 years ago

Codecov Report

Merging #24 (1a58b9b) into main (f8547b4) will decrease coverage by 0.09%. The diff coverage is 94.73%.

@@            Coverage Diff             @@
##             main      #24      +/-   ##
==========================================
- Coverage   96.32%   96.23%   -0.10%     
==========================================
  Files           1        1              
  Lines         599      637      +38     
==========================================
+ Hits          577      613      +36     
- Misses         22       24       +2     
Impacted Files Coverage Δ
src/DynamicNLPModels.jl 96.23% <94.73%> (-0.10%) ↓


Last update f8547b4...1a58b9b.

sshin23 commented 2 years ago

@frapac agreed that it would be good to define separate data structures for the dense and sparse cases. In terms of naming, I'd call them

SparseLQDynamicModel <: AbstractLQDynamicModel
DenseLQDynamicModel <: AbstractLQDynamicModel
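
A rough sketch of that proposed hierarchy (the fields here are placeholders, not the eventual package definitions):

```julia
# Hypothetical sketch of the naming proposed above; a real implementation
# would store the LQ problem data (objective matrices, dynamics, horizon, ...)
abstract type AbstractLQDynamicModel end

struct SparseLQDynamicModel <: AbstractLQDynamicModel
    data::Any   # placeholder for the sparse (full-variable) problem data
end

struct DenseLQDynamicModel <: AbstractLQDynamicModel
    data::Any   # placeholder for the dense (state-eliminated) problem data
end
```
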

frapac commented 2 years ago

@sshin23 I agree with the naming you just proposed:

SparseLQDynamicModel <: AbstractLQDynamicModel
DenseLQDynamicModel <: AbstractLQDynamicModel

Just: which one is supposed to implement the condensed model?

sshin23 commented 2 years ago

@frapac DenseLQDynamicModel implements the dense form (states are eliminated), while condensing (eliminating the inequality block) is done internally in MadNLP.
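For context, "states are eliminated" means the stacked states become an affine function of the inputs, $s = \Phi s_0 + \Gamma u$, with blocks built from powers of $A$. A small numerical illustration of that construction (not the package's actual implementation):

```julia
# Illustrative dynamics s_{t+1} = A s_t + B u_t
A  = [0.9 0.1; 0.0 0.8]
B  = reshape([0.0, 1.0], 2, 1)
s0 = [1.0, 0.0]
N  = 3
ns, nu = size(B)

# Stacked map: block row t of Φ is A^t, and block (t, j) of Γ is A^(t-j) B
Φ = vcat([A^t for t in 1:N]...)
Γ = zeros(N * ns, N * nu)
for t in 1:N, j in 1:t
    Γ[(t-1)*ns+1:t*ns, (j-1)*nu+1:j*nu] = A^(t - j) * B
end

u_stack = [0.5, -0.2, 0.1]
s_stack = Φ * s0 + Γ * u_stack   # states s_1, ..., s_N stacked in one vector
```

With this map, the dynamics constraints disappear and the problem can be posed in $u$ alone, which is the dense form described above.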

dlcole3 commented 2 years ago

@sshin23, do you want me to go ahead and make the change to

SparseLQDynamicModel <: AbstractLQDynamicModel
DenseLQDynamicModel <: AbstractLQDynamicModel

now, or should I do this later? Over Slack, it sounded like you suggested doing this later, but I can address it now if desired.

dlcole3 commented 2 years ago

Also, I have been mixing up "condensed" and "dense." I think I will rename some of my functions to use "dense" to reflect the proper terminology.

sshin23 commented 2 years ago

@dlcole3 You can work on the new data structure in a separate PR

dlcole3 commented 2 years ago

It is not exact, but it is very close (< 1e-10).
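Agreement at that level can be checked with an explicit tolerance; a minimal sketch with illustrative values (not the actual solver output):

```julia
# Illustrative stand-ins for a reconstructed quantity and the solver's value;
# the gap is below the 1e-10 level mentioned above
x_reconstructed = 1.0 + 5.0e-11
x_solver        = 1.0

agrees = isapprox(x_reconstructed, x_solver; atol = 1e-10)
```
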