Closed: ablaom closed this pull request 3 years ago.
Merging #179 (67409a5) into dev (a40bdfd) will decrease coverage by 0.14%. The diff coverage is 95.12%.
```
@@            Coverage Diff             @@
##              dev     #179      +/-   ##
==========================================
- Coverage   90.74%   90.59%   -0.15%
==========================================
  Files           8        9       +1
  Lines         216      234      +18
==========================================
+ Hits          196      212      +16
- Misses         20       22       +2
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/types.jl | 85.71% <0.00%> (-14.29%) | :arrow_down: |
| src/common.jl | 96.77% <100.00%> (+0.22%) | :arrow_up: |
| src/core.jl | 90.00% <100.00%> (-0.79%) | :arrow_down: |
| src/penalized_losses.jl | 100.00% <100.00%> (ø) | |
Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update a40bdfd...67409a5.
@ayush-1506 Would you like to review this PR? I've tested it locally on a GPU.
@DilumAluthge How do I put the GPU tests back for PRs onto dev?
@ablaom Sure, please give me a day.
This PR:

- […] the `fit!` method. This change was natural in addressing the above and anticipates a further transfer of responsibility to the planned data front-end (see https://alan-turing-institute.github.io/MLJ.jl/dev/adding_models_for_general_use/#Implementing-a-data-front-end and #97); a sketch of the front-end hooks follows below.
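For context, the data front-end referenced above boils down to two optional methods from the linked MLJ docs. A minimal sketch, assuming a table `X` and vector `y`; the model type `MyFluxModel` is a placeholder, not this package's actual type:

```julia
import MLJModelInterface
const MMI = MLJModelInterface

# Placeholder model type, for illustration only:
mutable struct MyFluxModel <: MMI.Deterministic end

# Convert user-supplied data once, up front, into the
# representation that `fit` actually consumes:
MMI.reformat(::MyFluxModel, X, y) = (MMI.matrix(X)', y)

# Resampling (e.g. in cross-validation) then acts on that
# model-specific representation instead of the raw table:
MMI.selectrows(::MyFluxModel, I, Xmatrix, y) =
    (view(Xmatrix, :, I), view(y, I))
```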
@ToucheSir Further to the Julia Discourse discussion, no change was actually necessary to the core `train!` loop to add regularization. It's just that the loss function passed to this loop now depends on the `chain` (the Flux model) when the L2/L1 regularisation parameters are non-trivial. To avoid the array-mutation error, I needed to avoid broadcasting in the computation of the penalty here. Any performance suggestions regarding these two bits of code are appreciated.
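For concreteness, a minimal sketch of a penalty computed without broadcasting, in the style of the Flux regularisation docs. The hyper-parameter names `lambda`/`alpha` and the helper `penalized_loss` are illustrative assumptions, not necessarily this PR's exact code:

```julia
using Flux

# Reductions like `sum(abs2, p)` avoid allocating or mutating
# intermediate arrays, so Zygote can differentiate them:
l1(p) = sum(abs, p)
l2(p) = sum(abs2, p)

# Elastic-net style penalty over all trainable parameters:
penalty(chain, lambda, alpha) =
    lambda*(alpha*sum(l1, Flux.params(chain)) +
            (1 - alpha)*sum(l2, Flux.params(chain)))

# The loss handed to the core training loop now closes over `chain`:
penalized_loss(loss, chain, lambda, alpha) =
    (x, y) -> loss(chain(x), y) + penalty(chain, lambda, alpha)
```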