mc2-project / federated-xgboost

Federated gradient boosted decision tree learning

Simulation on a single machine #14

Closed hiteshis closed 3 years ago

hiteshis commented 3 years ago

Is there any way to simulate this code with different processes on a single machine? I tried using `get_rank` to differentiate between the nodes (and their training data), but it throws the following error:

```
Traceback (most recent call last):
  File "federated-xgboost/demo/basic/demo.py", line 44, in <module>
    bst = fxgb.train(params, dtrain, num_rounds, evals=[(dtrain, "dtrain"), (dval, "dval")])
  File "lib/python3.6/site-packages/federatedxgboost-0.90-py3.6.egg/federatedxgboost/training.py", line 216, in train
    xgb_model=xgb_model, callbacks=callbacks)
  File "lib/python3.6/site-packages/federatedxgboost-0.90-py3.6.egg/federatedxgboost/training.py", line 74, in _train_internal
    bst.update(dtrain, i, obj)
  File "python3.6/site-packages/federatedxgboost-0.90-py3.6.egg/federatedxgboost/core.py", line 1109, in update
    dtrain.handle))
  File "python3.6/site-packages/federatedxgboost-0.90-py3.6.egg/federatedxgboost/core.py", line 176, in _check_call
    raise XGBoostError(py_str(_LIB.XGBGetLastError()))
federatedxgboost.core.XGBoostError: [16:58:36] federated-xgboost/include/xgboost/tree_model.h:295: Check failed: fi->Read(&param, sizeof(TreeParam)) == sizeof(TreeParam) (0 vs. 148) :
```
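For reference, the rank-based data split described above can be sketched without any `federatedxgboost` calls. This is a hypothetical illustration only: `shard_for_rank` is not part of the library; it just shows one way each simulated party could select its own contiguous slice of rows by rank before constructing its training matrix.

```python
# Hypothetical helper: split n_rows of training data across world_size
# simulated parties, giving each rank a contiguous, non-overlapping shard.
def shard_for_rank(n_rows, rank, world_size):
    """Return the half-open (start, end) row range owned by `rank`."""
    base, rem = divmod(n_rows, world_size)
    # The first `rem` ranks each take one extra row.
    start = rank * base + min(rank, rem)
    end = start + base + (1 if rank < rem else 0)
    return start, end

# Example: 10 rows split across 3 simulated parties.
print([shard_for_rank(10, r, 3) for r in range(3)])
# → [(0, 4), (4, 7), (7, 10)]
```

Each process would then load only its own slice (e.g. `data[start:end]`) before calling `fxgb.train`, so the parties never see overlapping rows.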

hiteshis commented 3 years ago

The issue is with renewing the certs, as pointed out in another issue: https://github.com/mc2-project/federated-xgboost/issues/11