LabeliaLabs / distributed-learning-contributivity

Simulate collaborative ML scenarios, experiment with multi-partner learning approaches, and measure the respective contributions of different datasets to model performance.
https://www.labelia.org
Apache License 2.0
56 stars 12 forks

Change results saving behaviour with results saved by default #337

Closed. bowni closed this pull request 2 years ago.

bowni commented 3 years ago

Changes:

Fixes #329, #331
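
For context, a minimal usage sketch of the behaviour the title describes, assuming the `Scenario` and `Experiment` APIs shown in the repo README; the argument names and the save location are assumptions, not taken from this PR's diff:

```python
# Hypothetical usage sketch: with this change, running an experiment
# should write its results to disk by default, with no opt-in needed.
# Argument names follow the repo README and are assumptions here.
from mplc.experiment import Experiment
from mplc.scenario import Scenario

scenario = Scenario(
    partners_count=3,
    amounts_per_partner=[0.2, 0.3, 0.5],
    epoch_count=2,
    minibatch_count=2,
)
experiment = Experiment(scenarios_list=[scenario], nb_repeats=1)
experiment.run()  # results are now saved by default
```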

codecov-commenter commented 2 years ago

Codecov Report

Merging #337 (8da900d) into master (4fb247a) will decrease coverage by 0.06%. The diff coverage is 70.00%.


@@            Coverage Diff             @@
##           master     #337      +/-   ##
==========================================
- Coverage   80.82%   80.76%   -0.07%     
==========================================
  Files          15       15              
  Lines        3088     3099      +11     
==========================================
+ Hits         2496     2503       +7     
- Misses        592      596       +4     
Impacted Files                              Coverage Δ
mplc/scenario.py                            80.06% <61.90%> (-2.44%) ↓
mplc/experiment.py                          83.33% <83.33%> (+2.46%) ↑
mplc/constants.py                           100.00% <100.00%> (ø)
mplc/multi_partner_learning/basic_mpl.py    85.71% <100.00%> (ø)
mplc/multi_partner_learning/utils.py        86.86% <0.00%> (+0.72%) ↑
mplc/models.py                              79.38% <0.00%> (+1.03%) ↑

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 4fb247a...8da900d.

RomainGoussault commented 2 years ago

Could we add some tests for this @bowni? I was thinking of running an experiment and checking that a result file was created (and the same thing for running a scenario).

@bowni I think it's still useful to add a test. What do you think?
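
For illustration, a minimal pytest-style sketch of such a test, assuming the `Scenario` constructor from the repo README; the exact constructor arguments and the default location of saved results are assumptions, not details confirmed in this thread:

```python
# Hypothetical test sketch (pytest). Constructor arguments follow the
# repo README example; the assumption that results land under the
# current working directory is not confirmed by this thread.
from pathlib import Path

from mplc.scenario import Scenario


def test_run_scenario_saves_results_by_default(tmp_path, monkeypatch):
    # Run from an isolated temporary directory so any default results
    # folder is created there rather than in the real project tree.
    monkeypatch.chdir(tmp_path)

    scenario = Scenario(
        partners_count=3,
        amounts_per_partner=[0.2, 0.3, 0.5],
        epoch_count=2,       # keep the run small so the test stays fast
        minibatch_count=2,
    )
    scenario.run()

    # With results saved by default, at least one file should now exist.
    assert any(p.is_file() for p in tmp_path.rglob("*"))
```

An analogous test for `Experiment.run()` could reuse the same temporary-directory setup and assert on the same condition.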