Closed: QB3 closed this pull request 3 years ago.
Merging #51 (cdab28c) into master (138d935) will decrease coverage by 0.85%. The diff coverage is 47.57%.
```diff
@@            Coverage Diff             @@
##           master      #51      +/-   ##
==========================================
- Coverage   66.01%   65.15%   -0.86%
==========================================
  Files          35       41       +6
  Lines        2704     2801      +97
  Branches      247      255       +8
==========================================
+ Hits         1785     1825      +40
- Misses        853      910      +57
  Partials       66       66
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| sparse_ho/optimizers/base.py | 0.00% <0.00%> (ø) | |
| sparse_ho/optimizers/adam.py | 12.90% <12.90%> (ø) | |
| sparse_ho/optimizers/line_search_wolfe.py | 13.15% <13.15%> (ø) | |
| sparse_ho/optimizers/gradient_descent.py | 19.04% <19.04%> (ø) | |
| sparse_ho/optimizers/line_search.py | 68.49% <68.49%> (ø) | |
| sparse_ho/ho.py | 100.00% <100.00%> (+50.00%) | :arrow_up: |
| sparse_ho/optimizers/__init__.py | 100.00% <100.00%> (ø) | |
| sparse_ho/tests/test_elastic.py | 98.95% <100.00%> (+0.04%) | :arrow_up: |
| sparse_ho/tests/test_grad_search.py | 100.00% <100.00%> (ø) | |
| sparse_ho/tests/test_logreg.py | 98.42% <100.00%> (+0.07%) | :arrow_up: |
| ... and 8 more | | |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 138d935...cdab28c. Read the comment docs.
The goal of this PR is to decouple the outer optimization process from the `grad_search` function. This paves the way for a naive gradient-descent implementation for debugging, as well as more refined optimizers such as Adam.

Closes #50.
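The decoupling described above can be sketched as follows: the outer loop lives in an optimizer object, and `grad_search` just delegates to it. This is an illustrative sketch only, not sparse_ho's actual API; the class and function names, signatures, and the `value_and_grad` callback are all hypothetical.

```python
# Hypothetical sketch of the decoupling this PR describes: the outer
# optimization loop is pulled out of grad_search into optimizer objects,
# so plain gradient descent, Adam, or a line search can be swapped in.
import numpy as np


class GradientDescent:
    """Naive gradient descent on the hyperparameter (useful for debugging)."""

    def __init__(self, step_size=0.01, n_outer=100, tol=1e-6):
        self.step_size = step_size
        self.n_outer = n_outer
        self.tol = tol

    def run(self, value_and_grad, x0):
        # value_and_grad(x) returns (objective value, gradient) at x.
        x = np.asarray(x0, dtype=float)
        for _ in range(self.n_outer):
            _, grad = value_and_grad(x)
            if np.linalg.norm(grad) < self.tol:
                break  # stop once the gradient is small enough
            x = x - self.step_size * grad
        return x


def grad_search(value_and_grad, x0, optimizer):
    """Outer optimization delegated to the optimizer object."""
    return optimizer.run(value_and_grad, x0)


# Toy usage: minimize f(x) = ||x - 3||^2, whose gradient is 2 (x - 3).
f = lambda x: (np.sum((x - 3.0) ** 2), 2.0 * (x - 3.0))
x_opt = grad_search(f, np.zeros(2), GradientDescent(step_size=0.1, n_outer=200))
```

With this shape, adding an Adam or Wolfe line-search optimizer only requires a new class exposing the same `run` method, leaving `grad_search` untouched.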