mc2-project / federated-xgboost

Federated gradient boosted decision tree learning

How are the XGBoost models aggregated? #15

Closed rkwojdan closed 3 years ago

rkwojdan commented 3 years ago

Hi,

currently I am mostly interested in the model aggregation part of federated learning, but I cannot understand how it is done here. I guess it uses rabit, but I cannot find any allreduce call (or similar) in the code, nor see how the global model is updated. As of now I have a feeling it works like this:

1) XGBoost model 1 is trained on local data 1
2) XGBoost model 1 is the input to XGBoost model 2, which is trained on local data 2
3) This ends once all local data and intermediate XGBoost models have been used

It resembles an online learning scheme (see the sketch below).
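For illustration, here is a minimal sketch of that scheme written against the public xgboost Python API; the party loop and the helper name are my own, not code from this repository:

```python
# A sketch of the sequential scheme described above, using the public
# xgboost Python API. The party loop and helper name are illustrative only;
# this is not code from federated-xgboost.
import xgboost as xgb

def train_sequentially(local_datasets, params, rounds_per_party=10):
    """Pass a single Booster from party to party, continuing training
    on each party's local data via the `xgb_model` argument."""
    booster = None
    for X, y in local_datasets:
        dtrain = xgb.DMatrix(X, label=y)
        booster = xgb.train(
            params,
            dtrain,
            num_boost_round=rounds_per_party,
            xgb_model=booster,  # None for the first party, then the running model
        )
    return booster
```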

Could you help me understand how the aggregation of XGBoost models works here?
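
To be concrete, this is the kind of allreduce-style aggregation I was expecting to find instead, where only gradient/hessian histograms cross party boundaries; `allreduce_sum` is just a placeholder I made up to stand in for something like rabit's AllReduce, not code from this repository:

```python
# What I was expecting instead: each party builds gradient/hessian histograms
# on its own data and only those sums are combined across parties.
# `allreduce_sum` is a placeholder, not an API from this repository.
import numpy as np

def allreduce_sum(local_hist):
    """Placeholder: a real allreduce would return the element-wise sum of
    this array over all parties, with every party receiving the result."""
    return local_hist  # single-process stand-in

def propose_split(local_grad_hist, local_hess_hist):
    # Histograms are indexed by (feature, bin); only these aggregates
    # would cross party boundaries, never the raw training rows.
    global_grad = allreduce_sum(np.asarray(local_grad_hist))
    global_hess = allreduce_sum(np.asarray(local_hess_hist))
    # Split scoring on the global histograms would then be identical at
    # every party, so all parties grow the same tree.
    return global_grad, global_hess
```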

podcastinator commented 3 years ago

Closing this issue; responded here: https://github.com/mc2-project/secure-xgboost/issues/127