LabeliaLabs / distributed-learning-contributivity

Simulate collaborative ML scenarios, experiment multi-partner learning approaches and measure respective contributions of different datasets to model performance.
https://www.labelia.org
Apache License 2.0

Removing val_loss and val_accuracy evaluation can speed up Keras' .fit() #309

Closed arthurPignet closed 3 years ago

arthurPignet commented 3 years ago

We ask Keras to compute val_accuracy/val_loss for each gradient step, while logging and saving only the last value. We should remove this evaluation from .fit() and perform a single evaluate() call after the .fit() returns.


history = partner_model.fit(partner.minibatched_x_train[self.minibatch_index],
                            partner.minibatched_y_train[self.minibatch_index],
                            batch_size=partner.batch_size,
                            verbose=0,
                            validation_data=self.val_data)  # <- this slows down the .fit() execution,
                                                            #    which is the longest function call
                                                            #    in a multi-partner learning training

self.history.history[partner_id][key_history][self.epoch_index, self.minibatch_index] = history.history[key_history][-1]

(These snippets are taken from the actual mplc code, but out of their respective contexts.)
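A minimal sketch of the proposed change, assuming self.val_data is an (x_val, y_val) tuple and the model is compiled with an accuracy metric so that evaluate() returns [loss, accuracy]; the exact logging keys and shapes in mplc may differ:

# Train without validation_data, so .fit() no longer runs a validation
# pass at every gradient step.
partner_model.fit(partner.minibatched_x_train[self.minibatch_index],
                  partner.minibatched_y_train[self.minibatch_index],
                  batch_size=partner.batch_size,
                  verbose=0)

# Single validation pass after .fit(), instead of one inside every .fit() call.
# evaluate() returns the loss followed by the compiled metrics.
val_loss, val_accuracy = partner_model.evaluate(self.val_data[0],
                                                self.val_data[1],
                                                batch_size=partner.batch_size,
                                                verbose=0)

# Hypothetical logging, mirroring the snippet above; the real mplc history
# structure may use different keys.
self.history.history[partner_id]['val_loss'][self.epoch_index, self.minibatch_index] = val_loss
self.history.history[partner_id]['val_accuracy'][self.epoch_index, self.minibatch_index] = val_accuracy

Since validation values were only ever saved for the last step anyway, evaluating once per .fit() call should produce the same logged numbers while skipping the redundant validation passes.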