LabeliaLabs / distributed-learning-contributivity

Simulate collaborative ML scenarios, experiment with multi-partner learning approaches, and measure the respective contributions of different datasets to model performance.
https://www.labelia.org
Apache License 2.0

Scenario/mpl log show inconsistent syntax #323

Open arthurPignet opened 3 years ago

arthurPignet commented 3 years ago

One example below: sometimes there is a `#` before the partner id, sometimes there isn't.

2021-01-14 17:54:30 | INFO | ### Splitting data among partners:
2021-01-14 17:54:30 | INFO | Train data split:
2021-01-14 17:54:36 | INFO | Partner #0: 9000 samples with labels [8 9]
2021-01-14 17:54:36 | INFO | Partner #1: 9000 samples with labels [6 7]
2021-01-14 17:54:36 | INFO | Partner #2: 9000 samples with labels [4 5]
2021-01-14 17:54:36 | INFO | Partner #3: 9000 samples with labels [2 3]
2021-01-14 17:54:36 | INFO | Partner #4: 9000 samples with labels [0 1]
2021-01-14 17:54:36 | INFO | Description of data scenario configured:
2021-01-14 17:54:36 | INFO | Number of partners defined: 5
2021-01-14 17:54:36 | INFO | Data distribution scenario chosen: Stratified samples split
2021-01-14 17:54:36 | INFO | Multi-partner learning approach: fedavg
2021-01-14 17:54:36 | INFO | Weighting option: data-volume
2021-01-14 17:54:36 | INFO | Iterations parameters: 20 epochs > 1 mini-batches > 32 gradient updates per pass
2021-01-14 17:54:36 | INFO | Data loaded: cifar10
2021-01-14 17:54:36 | INFO | 45000 train data with 45000 labels
2021-01-14 17:54:36 | INFO | 5000 val data with 5000 labels
2021-01-14 17:54:36 | INFO | 10000 test data with 10000 labels
2021-01-14 17:54:36 | INFO | ## Preparation of model's training on partners with ids: ['#0', '#1', '#2', '#3', '#4']
2021-01-14 17:54:37 | INFO | Init new model . . .
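One way to fix this would be to route every partner-id string through a single helper, so the `#` prefix is applied (or dropped) in exactly one place. A minimal sketch, assuming nothing about the actual codebase (the names `partner_id_str` and `describe_partner` are hypothetical, not from the repo):

```python
# Hypothetical helpers sketching a single source of truth for the
# partner-id log format; names are illustrative, not from the repo.

def partner_id_str(partner_id: int) -> str:
    """Render a partner id with the '#' prefix used in the log messages."""
    return f"#{partner_id}"


def describe_partner(partner_id: int, n_samples: int, labels) -> str:
    """Build the per-partner data-split log line with consistent formatting."""
    return f"Partner {partner_id_str(partner_id)}: {n_samples} samples with labels {labels}"
```

Any log call that mentions a partner would then use `partner_id_str(...)` instead of formatting the id inline, so a single change to the helper updates the whole log output.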