LabeliaLabs / distributed-learning-contributivity

Simulate collaborative ML scenarios, experiment with multi-partner learning approaches, and measure the respective contributions of different datasets to model performance.
https://www.labelia.org
Apache License 2.0

FedGDO and its variations #325

Closed arthurPignet closed 3 years ago

arthurPignet commented 3 years ago

New mpl methods:

FedGDO stands for Federated Gradient Double Optimization.

This method is inspired by federated gradient, but modifies the local computation of the gradient. In this version we use a local, partner-specific optimizer to perform several minimization steps on the local loss during a minibatch. We use the sum of these weight updates as the gradient sent to the global optimizer. The global optimizer aggregates the gradient-like updates sent by the partners and performs an optimization step with this aggregated gradient. Three variations of this mpl method are tested here; a sketch of the core idea follows.
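As a minimal sketch of one FedGDO minibatch, assuming Keras sequential/functional models and one `(x, y)` batch per partner. All names here (`fedgdo_minibatch`, `local_steps`, the choice of Adam locally and SGD globally, mean aggregation) are illustrative assumptions, not the repository's actual API:

```python
import tensorflow as tf

def fedgdo_minibatch(global_model, partner_batches, local_steps=3,
                     local_lr=0.01, global_optimizer=None):
    """One FedGDO minibatch: local multi-step optimization per partner,
    then one global optimizer step on the aggregated pseudo-gradients."""
    if global_optimizer is None:
        global_optimizer = tf.keras.optimizers.SGD(learning_rate=1.0)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

    start_weights = global_model.get_weights()
    trainable_start = [v.numpy() for v in global_model.trainable_variables]
    pseudo_gradients = []

    for x, y in partner_batches:
        # Each partner starts from the shared global weights and runs its
        # own (partner-specific) local optimizer for several steps.
        local_model = tf.keras.models.clone_model(global_model)
        local_model.set_weights(start_weights)
        local_optimizer = tf.keras.optimizers.Adam(learning_rate=local_lr)
        for _ in range(local_steps):
            with tf.GradientTape() as tape:
                loss = loss_fn(y, local_model(x, training=True))
            grads = tape.gradient(loss, local_model.trainable_variables)
            local_optimizer.apply_gradients(
                zip(grads, local_model.trainable_variables))
        # The accumulated local weight update (start - end) plays the role
        # of the gradient this partner sends to the global optimizer.
        pseudo_gradients.append(
            [w0 - w1.numpy() for w0, w1
             in zip(trainable_start, local_model.trainable_variables)])

    # Aggregate the gradient-like updates (simple mean here) and let the
    # global optimizer take one step with the aggregated gradient.
    aggregated = [tf.reduce_mean(tf.stack(gs), axis=0)
                  for gs in zip(*pseudo_gradients)]
    global_optimizer.apply_gradients(
        zip(aggregated, global_model.trainable_variables))
    return global_model
```

With `local_steps=1` and plain SGD as the local optimizer, this roughly reduces to ordinary federated gradient averaging; the "double optimization" comes from having stateful optimizers on both the partner side and the global side.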

These methods are tested in this notebook -> https://colab.research.google.com/drive/1CcQpWRpLGldj3iNR7v7Hv2brdwBP3z7D?usp=sharing

Please note that as I am currently working in the notebook, it may change and not be fully readable. Notebook access is currently limited to substra.org, but don't hesitate to come to me for access.

bowni commented 3 years ago

Notes from workgroup meeting on 2021.05.11:

arthurPignet commented 3 years ago

This draft was too many commits behind master to be easily rebased. I created a new branch in a new PR; see PR #345.