Closed — dlcole3 closed this pull request 2 years ago
Merging #32 (641c280) into main (fe9901e) will increase coverage by 0.18%. The diff coverage is 100.00%.
```diff
@@             Coverage Diff             @@
##             main      #32      +/-   ##
==========================================
+ Coverage   97.79%   97.97%   +0.18%
==========================================
  Files           1        1
  Lines         724      790      +66
==========================================
+ Hits          708      774      +66
  Misses         16       16
==========================================
```

| Impacted Files | Coverage Δ | |
|---|---|---|
| src/DynamicNLPModels.jl | 97.97% <100.00%> (+0.18%) | :arrow_up: |
Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update fe9901e...641c280.
The sparse model construction was very slow because the `H` and `J` matrices were built with several for loops that assigned dense arrays into sparse arrays (e.g., `H[..., ...] = Q`). This PR instead builds the `H` and `J` matrices by constructing the `colptr`, `rowval`, and `nzval` vectors directly from the corresponding data and then passing them to the `SparseMatrixCSC` constructor. The `colptr`, `rowval`, and `nzval` vectors for `H` and `J` are instantiated within the `_build_sparse_lq_dynamic_model` function; the `_set_sparse_H!` and `_set_sparse_J!` functions are then called to fill in this data. For large problems (N, ns, and nu ~ 100+), these changes give a significant speedup.