Hello!
First of all, thank you for sharing your code; it has been immensely helpful in experimenting with differential privacy in gradient-boosted trees.
I'd like to ask if it would be possible for the authors to share the code that was used to evaluate the model on the test sets.
I'm trying to use this code as a baseline (and to reproduce the test scores reported in the paper) for some experiments I'm conducting.
I've tried using `lgb.train` followed by `lgb.predict` to evaluate on the test set, but no matter what changes I apply (setting the number of trees to 10 as suggested in a previous issue, changing the privacy budget, etc.), I still get the same scores.
This is, roughly, the code I used in `run_exp.py` (inspired by what I've seen in a previous issue):
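A minimal sketch of what I'm running; the data loading (`load_dataset`) and the DP budget parameter name are placeholders from my side, so they may not match what your fork actually expects:

```python
import lightgbm as lgb
import numpy as np
from sklearn.metrics import mean_squared_error

# Placeholder loading step -- in my actual script the datasets from the
# paper are loaded here. load_dataset() is a hypothetical helper.
X_train, y_train, X_test, y_test = load_dataset()

params = {
    "objective": "regression",
    "num_leaves": 31,
    "learning_rate": 0.1,
    # DP-specific parameter -- name taken from a previous issue; I'm not
    # certain this is the exact key the modified LightGBM expects.
    "total_budget": 1.0,
}

train_data = lgb.Dataset(X_train, label=y_train)

# 10 trees, as suggested in the previous issue.
booster = lgb.train(params, train_data, num_boost_round=10)

# Evaluate on the held-out test set.
preds = booster.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, preds))
print(f"Test RMSE: {rmse:.4f}")
```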
Thanks in advance for the help! @PintOfBitter