Simulate collaborative ML scenarios, experiment with multi-partner learning approaches, and measure the respective contributions of different datasets to model performance.
The current multi-partner learning implementation is not well suited to the new package/self-service design of the library. It makes it quite hard to implement a new contributivity measurement or distributed learning approach.
One solution may be to use the current MultiPartnerLearning class as a superclass and create new classes (fedavg, seq-avg, and so on) that inherit from MultiPartnerLearning.
A scenario would then instantiate, for instance, a fedavg object and call its .fit method.
Each subclass would override only the methods where its behavior differs from the others.
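To make the idea concrete, here is a minimal sketch of that class hierarchy. The class and method names mirror the proposal, but everything else (plain lists of floats standing in for model weights, the `aggregate` hook, the constructor signature) is illustrative, not the library's actual API:

```python
class MultiPartnerLearning:
    """Superclass holding the shared multi-partner training loop."""

    def __init__(self, partner_weights, epochs=3):
        # One weight vector per partner; real code would hold partner
        # datasets and models instead of raw lists of floats.
        self.partner_weights = partner_weights
        self.epochs = epochs
        self.global_weights = None

    def fit(self):
        # The training loop lives here, once, in the superclass.
        # Subclasses only customize how partner updates are combined.
        for _ in range(self.epochs):
            self.global_weights = self.aggregate(self.partner_weights)
        return self.global_weights

    def aggregate(self, weights):
        raise NotImplementedError("Each subclass defines its aggregation step")


class FedAvg(MultiPartnerLearning):
    """Federated averaging: average each coordinate across partners."""

    def aggregate(self, weights):
        n = len(weights)
        return [sum(coord) / n for coord in zip(*weights)]
```

A scenario would then do something like `FedAvg(scenario_weights).fit()`, and adding seq-avg or any new approach would mean writing one small subclass that overrides `aggregate` (or whichever step differs), leaving the shared loop untouched.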
Any other ideas or comments?