bifani opened this issue 8 years ago
Hi, Simone.
You're doing something somewhat strange and expecting the algorithms to know things they can't know about.
Cross-validation in machine learning is easy when you have a single figure of merit (ROC AUC, MSE, classification accuracy); in that case the evaluation is quite straightforward.
However, in the case of reweighting, correct validation requires two steps:
1) comparing the one-dimensional distribution of each feature between the reweighted original sample and the target (e.g. with a weighted KS test);
2) training a classifier to discriminate the reweighted original sample from the target and checking that its ROC AUC on a holdout is compatible with 0.5 (see the sketch below).
(Also, is there any reason to optimize parameters automatically?)
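A minimal sketch of what the two checks could look like (the `ks_weighted` helper and the `GradientBoostingClassifier` discriminator below are illustrative choices, not part of hep_ml; `original`, `target`, `columns` and `new_weights` stand for your two samples, the feature list and the weights predicted by a fitted GBReweighter):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split


def ks_weighted(x1, x2, w1, w2):
    """Step 1 helper: weighted two-sample KS distance for one feature."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    w1, w2 = np.asarray(w1, dtype=float), np.asarray(w2, dtype=float)
    idx1, idx2 = np.argsort(x1), np.argsort(x2)
    x1, w1 = x1[idx1], w1[idx1]
    x2, w2 = x2[idx2], w2[idx2]
    # weighted empirical CDFs evaluated on the pooled set of points
    points = np.sort(np.concatenate([x1, x2]))
    cdf1 = np.concatenate([[0.0], np.cumsum(w1) / np.sum(w1)])
    cdf2 = np.concatenate([[0.0], np.cumsum(w2) / np.sum(w2)])
    ecdf1 = cdf1[np.searchsorted(x1, points, side="right")]
    ecdf2 = cdf2[np.searchsorted(x2, points, side="right")]
    return np.max(np.abs(ecdf1 - ecdf2))


def discriminator_auc(original, target, original_weights):
    """Step 2 helper: train a classifier to separate reweighted original from target.

    An AUC compatible with 0.5 on the held-out part means the classifier
    cannot tell the samples apart, i.e. the reweighting worked."""
    X = np.concatenate([original, target])
    y = np.concatenate([np.zeros(len(original)), np.ones(len(target))])
    w = np.concatenate([original_weights, np.ones(len(target))])
    X_tr, X_te, y_tr, y_te, w_tr, w_te = train_test_split(X, y, w, random_state=42)
    clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
    clf.fit(X_tr, y_tr, sample_weight=w_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1], sample_weight=w_te)


# new_weights = reweighter.predict_weights(original) from a fitted GBReweighter
for column in columns:
    print(column, ks_weighted(original[column], target[column],
                              new_weights, np.ones(len(target))))
print("discriminator AUC:", discriminator_auc(original[columns].values,
                                              target[columns].values, new_weights))
```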
Hi,
OK, let me try to clarify the situation.
I have played a bit with the hyperparameters and ended up using the following configuration:
```python
GBReweighterPars = {"n_estimators"     : 200,
                    "learning_rate"    : 0.1,
                    "max_depth"        : 4,
                    "min_samples_leaf" : 1000,
                    "subsample"        : 1.0}
```
However, when I use different samples with much lower statistics I am afraid the above is far from optimal, e.g. too many n_estimators, causing the reweighter to misbehave. Rather than trying other settings by hand, I was wondering if there is an automated way to study this.
In particular, after having created the reweighter I compute the ROC AUC for a number of variables of interest, which I could use as a FoM (a sketch of what I mean follows below). Would that be useful?
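To make this concrete, the per-variable ROC AUC I mentioned looks roughly like the following (just a sketch; `reweighter`, `original`, `target` and `variables_of_interest` are placeholders for the fitted GBReweighter, the two DataFrames and the list of columns):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# use each variable itself as the discriminant between reweighted MC and target:
# an AUC close to 0.5 means the 1-d distributions agree after reweighting
new_weights = reweighter.predict_weights(original)
for column in variables_of_interest:
    values = np.concatenate([original[column], target[column]])
    labels = np.concatenate([np.zeros(len(original)), np.ones(len(target))])
    weights = np.concatenate([new_weights, np.ones(len(target))])
    auc = roc_auc_score(labels, values, sample_weight=weights)
    print(column, max(auc, 1.0 - auc))   # fold around 0.5
```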
Thanks
> Would that be useful?
Not really. 1-dimensional discrepancies are not all discrepancies.
You can drive the 1-dimensional ROC AUCs to 0.5 with max_depth=1, but you won't cover any non-trivial (multi-dimensional) difference between the distributions.
(Well, you can use it as a starting point and then check the results using step 2, but no guarantees can be made for this approach.)
OK, then how do you suggest picking the hyperparameters?
If you really want to automate this process, you need to write an evaluation function which covers both steps 1) and 2) mentioned above, e.g. the sum over features of KS(feature_i) plus abs(ROC AUC of the classifier - 0.5); a sketch follows below.
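A sketch of what such an evaluation function plus a plain parameter scan could look like (it reuses the `ks_weighted` and `discriminator_auc` helpers sketched earlier in the thread; the grid values and the `original_train` / `original_holdout` split are illustrative placeholders, not a recommendation):

```python
import itertools
import numpy as np
from hep_ml.reweight import GBReweighter


def reweighting_fom(original, target, weights, columns):
    """Lower is better: sum of per-feature weighted KS (step 1)
    plus |AUC - 0.5| of a discriminating classifier (step 2)."""
    ks_sum = sum(ks_weighted(original[c], target[c], weights, np.ones(len(target)))
                 for c in columns)
    auc = discriminator_auc(original[columns].values, target[columns].values, weights)
    return ks_sum + abs(auc - 0.5)


# plain scan over a few settings; evaluate on a holdout so that configurations
# which merely overfit the reweighting are not rewarded
grid = {"n_estimators": [30, 50, 100],
        "learning_rate": [0.1, 0.2, 0.3],
        "max_depth": [2, 3, 4],
        "min_samples_leaf": [200, 1000]}

results = []
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    rw = GBReweighter(**params)
    rw.fit(original_train, target_train)              # placeholder training samples
    w = rw.predict_weights(original_holdout)          # placeholder holdout samples
    results.append((reweighting_fom(original_holdout, target_holdout, w, columns), params))

best_fom, best_params = min(results, key=lambda r: r[0])
print(best_fom, best_params)
```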
As for me: I pick a relatively small number of trees (30-50), select the leaf size and regularization according to the dataset, and play with the depth (2-4) and learning rate (0.1-0.3). I stop when I see that I have significantly reduced the discrepancy between the datasets. There are many other errors to be dealt with in the analysis, and trying to drive only one of them to zero isn't a wise strategy.
Hi,
I am using this package to reweight MC to look like sPlotted data, and I would like to scan the hyperparameters to look for the best configuration. scikit-learn provides tools for this (e.g. GridSearchCV or RandomizedSearchCV), but I am having trouble interfacing the two packages. Has anyone done that? Are there alternative ways within hep_ml?
In particular, I have pandas DataFrames for the original and target samples, and I am trying something like
but I get the following error
However, I am not sure how to set the score method for GBReweighter.
Any help/suggestions/examples would be much appreciated.
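For reference, one possible way to give GridSearchCV something to call is a thin wrapper that exposes fit and score on a stacked sample (just a sketch, not part of hep_ml; it reuses the `ks_weighted` helper sketched earlier in the thread and scores with minus the summed weighted KS, so the step-2 classifier check still has to be done separately; `original`, `target` and `columns` are placeholders):

```python
import numpy as np
from sklearn.base import BaseEstimator
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from hep_ml.reweight import GBReweighter


class ReweighterEstimator(BaseEstimator):
    """sklearn-style wrapper: X is the stacked (original + target) sample,
    y is 0 for the original (MC) rows and 1 for the target (sPlotted data) rows."""

    def __init__(self, n_estimators=40, learning_rate=0.2, max_depth=3, min_samples_leaf=200):
        self.n_estimators = n_estimators
        self.learning_rate = learning_rate
        self.max_depth = max_depth
        self.min_samples_leaf = min_samples_leaf

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.reweighter_ = GBReweighter(n_estimators=self.n_estimators,
                                        learning_rate=self.learning_rate,
                                        max_depth=self.max_depth,
                                        min_samples_leaf=self.min_samples_leaf)
        self.reweighter_.fit(X[y == 0], X[y == 1])
        return self

    def score(self, X, y):
        # GridSearchCV maximises the score, so return minus the summed weighted KS
        # distance over the features (step 1 of the validation only)
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        original, target = X[y == 0], X[y == 1]
        weights = self.reweighter_.predict_weights(original)
        return -sum(ks_weighted(original[:, i], target[:, i], weights, np.ones(len(target)))
                    for i in range(X.shape[1]))


# stack the two DataFrames and label their origin
X = np.concatenate([original[columns].values, target[columns].values])
y = np.concatenate([np.zeros(len(original)), np.ones(len(target))])

# stratify the folds so every fold contains both original and target rows
search = GridSearchCV(ReweighterEstimator(),
                      {"n_estimators": [30, 50], "max_depth": [2, 3, 4]},
                      cv=StratifiedKFold(n_splits=3, shuffle=True, random_state=0))
search.fit(X, y)
print(search.best_params_, search.best_score_)
```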