dlcole3 closed this pull request 2 years ago
Merging #39 (966bb47) into main (5ff3dee) will decrease coverage by 0.25%. The diff coverage is 97.36%.
```diff
@@            Coverage Diff             @@
##             main      #39      +/-   ##
==========================================
- Coverage   97.56%   97.30%   -0.26%
==========================================
  Files           1        1
  Lines        1066     1186     +120
==========================================
+ Hits         1040     1154     +114
- Misses         26       32       +6
```

| Impacted Files | Coverage Δ | |
|---|---|---|
| src/DynamicNLPModels.jl | 97.30% <97.36%> (-0.26%) | :arrow_down: |
- `LQJacobianOperator` now stores the implicit Jacobian as 3 tensors rather than 1 matrix (this makes the block matrices contiguous in memory). I also updated the `mul!` functions and `add_jtsj!` to work with these tensors (the `mul!` functions now take 2 for loops rather than 1).
- Added `_dnlp_unsafe_wrap` to access the data from the tensors (this results in fewer allocations than calling `view` within `LinearAlgebra.mul!`).
- Added `mul!` and `add_jtsj!` functions that work with `CuArrays`. These functions are defined with multiple dispatch, and the functions for calculating $Jx$ and $J^T x$ use `gemm_strided_batched!`. This could be optimized more in the future.
- `add_jtsj!` with `CuArrays` is not close to optimal. Unfortunately, `mul!(C, A, B, alpha, beta)` does not work with `CuArrays` when `C` is accessed with a `view`, so I had to define a new storage matrix, `H_sub_block`, within `LQJacobianOperator`. I expect this will change in the future when I update this function with `CUBLAS` functions.
- Verified that `add_jtsj!` and `mul!` using the GPU give the correct answer.
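To illustrate the idea of the tensor storage (not the actual DynamicNLPModels.jl code), here is a minimal sketch of a Jacobian-vector product where the Jacobian blocks live in a 3-tensor `J[:, :, k]`, so each block is contiguous in memory and the product reduces to a loop over dense slices. The function name `block_jac_mul!` and the block-diagonal layout are illustrative assumptions:

```julia
using LinearAlgebra

# Hypothetical sketch: J[:, :, k] is the k-th (contiguous) Jacobian block.
# y = J * x becomes one dense mul! per block slice.
function block_jac_mul!(y::AbstractVector{T}, J::AbstractArray{T,3},
                        x::AbstractVector{T}) where {T}
    m, n, nblocks = size(J)
    for k in 1:nblocks
        yk = view(y, (k - 1) * m + 1 : k * m)  # output rows for block k
        xk = view(x, (k - 1) * n + 1 : k * n)  # input cols for block k
        mul!(yk, view(J, :, :, k), xk)          # yk = J_k * xk
    end
    return y
end
```

In the PR itself, the `view` calls on the tensor data are replaced by `_dnlp_unsafe_wrap`, which reduces allocations inside `LinearAlgebra.mul!`.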
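For the $J^T \Sigma J$ accumulation that `add_jtsj!` performs, a simplified block-wise sketch under the same assumed tensor layout (names and the diagonal-$\Sigma$ form here are illustrative, not the package's API):

```julia
using LinearAlgebra

# Hypothetical sketch of a block-wise H += J' * Sigma * J accumulation,
# with Jacobian blocks stored as a 3-tensor and Sigma given as a vector
# of diagonal entries.
function add_jtsj_blocks!(H::AbstractMatrix{T}, J::AbstractArray{T,3},
                          sigma::AbstractVector{T}) where {T}
    m, n, nblocks = size(J)
    for k in 1:nblocks
        Jk = view(J, :, :, k)
        Sk = Diagonal(view(sigma, (k - 1) * m + 1 : k * m))
        Hk = view(H, (k - 1) * n + 1 : k * n, (k - 1) * n + 1 : k * n)
        Hk .+= Jk' * Sk * Jk   # accumulate H_kk += J_k' * S_k * J_k
    end
    return H
end
```

On the GPU this kind of in-place accumulation is what runs into the limitation described above: 5-argument `mul!` into a `view` of a `CuArray` does not work, hence the `H_sub_block` staging buffer in `LQJacobianOperator`.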