**Closed** — avik-pal closed this 11 months ago
Merging #236 (1c19fa7) into master (a6af39c) will increase coverage by 73.03%. The diff coverage is 89.28%.
```
@@            Coverage Diff            @@
##           master     #236       +/-   ##
===========================================
+ Coverage   20.19%   93.22%   +73.03%
===========================================
  Files           9       12        +3
  Lines         832      901       +69
===========================================
+ Hits          168      840      +672
+ Misses        664       61      -603
```
| Files | Coverage Δ | |
|---|---|---|
| src/gaussnewton.jl | 75.00% <100.00%> (+75.00%) | :arrow_up: |
| src/levenberg.jl | 98.42% <100.00%> (+98.42%) | :arrow_up: |
| ext/NonlinearSolveLeastSquaresOptimExt.jl | 95.83% <95.83%> (ø) | |
| src/NonlinearSolve.jl | 85.00% <0.00%> (+0.78%) | :arrow_up: |
| src/utils.jl | 77.77% <50.00%> (+19.82%) | :arrow_up: |
| ext/NonlinearSolveFastLevenbergMarquardtExt.jl | 91.66% <91.66%> (ø) | |
| src/algorithms.jl | 85.71% <85.71%> (ø) | |
| src/jacobian.jl | 86.15% <77.77%> (+5.82%) | :arrow_up: |

... and 4 files with indirect coverage changes
```julia
A .= J' * J
```

What's this actually used for? Generally that's a bad idea.
It is used for GaussNewton and LM when the problem is an NLLS. For an NLProblem we can get around it by dropping the $J^T$ term on both sides.
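A minimal sketch of the distinction above, with hypothetical numbers (not from the PR): for an NLLS step the $J^T$ terms are kept in the normal equations, while for a square NLProblem Jacobian they cancel from both sides.

```julia
using LinearAlgebra

# Overdetermined NLLS case: 3 residuals, 2 unknowns (illustrative values).
J = [1.0 2.0; 3.0 4.0; 5.0 6.0]
r = [1.0, 2.0, 3.0]

# GaussNewton / LM step: solve the normal equations (JᵀJ) δ = -Jᵀ r.
δ_nlls = (J' * J) \ (-(J' * r))

# Square NLProblem Jacobian: Jᵀ drops from both sides, solve J δ = -r directly.
Jsq = [1.0 2.0; 3.0 4.0]
rsq = [1.0, 2.0]
δ_nl = Jsq \ (-rsq)
```

For full-rank `J`, the normal-equations solution coincides with the least-squares solution `J \ (-r)`, which is why the $J^T J$ product shows up at all.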
I added that dispatch for the sparse-array case, because the in-place version hits the generic matmul code. For dense matrices it seems to hit the correct BLAS `syrk` dispatches, but it doesn't for banded or sparse matrices (there seems to be a version in the Intel MKL docs, but I don't know how to access it).
Let's hold off on merging this. We should be specializing the `J'J` matmul to use symmetric factorizations.
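One possible shape for such a specialization — a sketch under my own naming (`jtj!` is hypothetical, not the PR's API) — is to route the dense case through the BLAS symmetric rank-k update (`syrk`), which writes only one triangle, and wrap the buffer in `Symmetric` so downstream factorizations can exploit the structure:

```julia
using LinearAlgebra
using LinearAlgebra: BLAS

# Sketch: compute A = JᵀJ via syrk instead of the generic mul! path.
# Only the upper triangle of A is written, hence the Symmetric wrapper.
function jtj!(A::Matrix{T}, J::Matrix{T}) where {T<:LinearAlgebra.BlasFloat}
    BLAS.syrk!('U', 'T', one(T), J, zero(T), A)  # upper triangle of A ← JᵀJ
    return Symmetric(A, :U)
end

J = rand(100, 10)
A = Matrix{Float64}(undef, 10, 10)
S = jtj!(A, J)
```

The `Symmetric` result also lets the linear solve pick `cholesky`/`bunchkaufman` paths instead of a general LU.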
~Should be good to go!~
While I am at it, let me wrap FastLM as well.
@ChrisRackauckas I will handle the `J' * J` thing in a later PR. The caching stuff makes the change non-trivial, since we have to specialize on which linsolve is being used.
`mul!(A, J', J)` seems slower than `A .= J' * J` (@ChrisRackauckas any idea why?)
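A plausible (unverified) explanation: `A .= J' * J` materializes a temporary for `J' * J` through a fast BLAS kernel and then broadcasts the copy into `A`, whereas `mul!(A, J', J)` may dispatch to a slower generic method for the `Adjoint`-wrapped argument depending on the array types involved. A rough way to check, with arbitrary sizes:

```julia
using LinearAlgebra

J = rand(500, 50)
A = zeros(50, 50)

# Allocates a temporary for J' * J, then copies it into A.
t_broadcast = @elapsed (A .= J' * J)

# In place, no temporary; speed depends on which mul! method is hit.
t_inplace = @elapsed mul!(A, J', J)
```

Timings from a single `@elapsed` call include compilation, so a fair comparison would use BenchmarkTools' `@btime` and also inspect `@which mul!(A, J', J)` to see the actual method being dispatched.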