MadNLP / DynamicNLPModels.jl

NLPModels for dynamic optimization
MIT License

Improved `DenseLQDynamicModel` construction to require less memory and build faster #31

Closed · dlcole3 closed this 2 years ago

dlcole3 commented 2 years ago

Reformulated the source code to build the Hessian and Jacobian one block at a time for the dense model. Previously, all block matrices were built, stored, and then multiplied together, which was inefficient because many of the multiplications were redundant. Now, each block within the block matrices is multiplied out only when it is needed inside the for loops (a rough sketch of the pattern is below).
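To illustrate the idea with a minimal sketch (assumed names and structure, not the package's actual internals): for a condensed LQ problem the Hessian has the form `G' * Q̄ * G`, where `G` is block lower triangular and built from repeated powers of the same blocks. Computing each Hessian block inside a loop reuses the few distinct block products instead of materializing full block matrices and multiplying them.

```julia
using LinearAlgebra

# Minimal sketch, not the package's actual code. Gblocks[k] stands for the
# k-th distinct block of the block-lower-triangular prediction matrix
# (e.g., A^(k-1) * B), computed once and reused everywhere it appears.
function hessian_blockwise(Gblocks::Vector{Matrix{Float64}}, Q::Matrix{Float64})
    N = length(Gblocks)
    m = size(Gblocks[1], 2)              # inputs per block column
    H = zeros(N * m, N * m)
    for i in 1:N, j in 1:i               # fill the lower triangle only
        Hij = zeros(m, m)
        for k in i:N                     # G is block lower triangular, so
            # the (k, i) block is A^(k-i) * B = Gblocks[k - i + 1] for k >= i
            Hij .+= Gblocks[k - i + 1]' * Q * Gblocks[k - j + 1]
        end
        H[(i-1)*m+1:i*m, (j-1)*m+1:j*m] .= Hij
        j < i && (H[(j-1)*m+1:j*m, (i-1)*m+1:i*m] .= Hij')  # symmetry
    end
    return H
end

# Example: horizon N = 3, so the distinct blocks are [B, A*B, A^2*B]
A = [0.9 0.1; 0.0 0.8]
B = reshape([1.0, 0.5], 2, 1)
Q = Matrix(1.0I, 2, 2)
H = hessian_blockwise([B, A * B, A * A * B], Q)
```

Each distinct product appears in many Hessian blocks, so computing it once and looping over block indices avoids the redundant multiplications that a full `G' * Q̄ * G` performs.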

This PR includes:

This PR does not include updates to the sparse formulation (which is also somewhat inefficient) or a redefinition of the internal functions to be non-allocating. Those changes are still needed and will be addressed in separate PRs.

dlcole3 commented 2 years ago

I also updated the `build_thinplate.jl` file in the `examples/` folder, which used the old `DenseLQDynamicBlocks` structure.

codecov-commenter commented 2 years ago

Codecov Report

Merging #31 (b18f65b) into main (5aa725d) will increase coverage by 0.34%. The diff coverage is 100.00%.

```diff
@@            Coverage Diff             @@
##             main      #31      +/-   ##
==========================================
+ Coverage   97.44%   97.79%   +0.34%     
==========================================
  Files           1        1              
  Lines         705      724      +19     
==========================================
+ Hits          687      708      +21     
+ Misses         18       16       -2     
```

| Impacted Files | Coverage Δ |
| --- | --- |
| src/DynamicNLPModels.jl | 97.79% <100.00%> (+0.34%) ↑ |


dlcole3 commented 2 years ago

@frapac I made one more change after requesting reviews again (sorry, I should have added it before requesting reviews). I realized I was allocating extra memory for the Jacobian; I was essentially building the Jacobian twice. I removed the extra `J1`, `J2`, and `J3` matrices in the most recent commit.
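For context, a minimal sketch of what such a fix typically looks like. The names `J1`/`J2`/`J3` come from the comment above, but the sizes and block contents here are made up:

```julia
using LinearAlgebra

# Hypothetical sketch of the allocation fix. The old pattern materialized
# J1, J2, J3 and then stacked them, holding the Jacobian's entries twice
# at peak memory:
#
#   J1 = ...; J2 = ...; J3 = ...
#   J  = vcat(J1, J2, J3)      # second copy of all the data
#
# The new pattern allocates J once and writes each row block directly
# into a view of it:
n, m = 4, 3
J = zeros(3n, m)
for k in 1:3
    Jk = view(J, (k - 1) * n + 1 : k * n, :)
    Jk .= k .* Matrix{Float64}(I, n, m)   # stand-in for the real block computation
end
```

Writing through views keeps a single backing array for the Jacobian, so the intermediate block matrices never need to exist.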