JohnM-TX opened this thread 9 years ago
In the recent Springleaf competition I tested something called margin predictions from xgboost. We tried the following approach.
1 - Made different groups of features by intuition. 2 - Ran xgb and predicted the CV margin for each set of features. 3 - Used the margins as features when building the ensemble.
Some features may not look important on their own, but running models on different sets of features often helps the ensemble a lot. My worst model in Springleaf was ExtraTrees, yet it was the most significant model in the ensemble. Margins created on sets of variables that were not individually significant also gave a good lift in the final ensemble.
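The three steps above can be sketched roughly as follows. This is a minimal illustration, not the actual Springleaf code: a least-squares fit stands in for xgboost, the feature-group split is made up, and with xgboost you would collect per-fold raw scores via predict(..., output_margin=True) instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 rows, 6 features, continuous target.
X = rng.normal(size=(200, 6))
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=200)

# Step 1: feature groups chosen "by intuition" (hypothetical split).
groups = {"groupA": [0, 1, 2], "groupB": [3, 4, 5]}

def oof_margin(X_sub, y, n_folds=5):
    """Out-of-fold predictions from a simple linear model, used as a
    margin-like feature. Out-of-fold scoring avoids leaking the target
    into the ensemble stage."""
    n = len(y)
    folds = np.array_split(rng.permutation(n), n_folds)
    margin = np.empty(n)
    for fold in folds:
        train = np.setdiff1d(np.arange(n), fold)
        coef, *_ = np.linalg.lstsq(X_sub[train], y[train], rcond=None)
        margin[fold] = X_sub[fold] @ coef
    return margin

# Steps 2-3: one margin column per feature group, appended as inputs
# for the ensemble-stage model.
margin_features = np.column_stack(
    [oof_margin(X[:, cols], y) for cols in groups.values()]
)
X_ensemble = np.hstack([X, margin_features])
print(X_ensemble.shape)  # (200, 8)
```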
Thakur, is that like an "offset" in R? It kinda looks like that in the XGBoost documentation, but I can't quite tell. An offset is a term the model must take into account as-is, something like a linear term with its coefficient fixed at 1: the model is constrained to use it rather than adjust it. In a regular linear model that's almost equivalent to subtracting it from the target. It's often used in insurance, where the number of policyholders goes into a Poisson model as an offset, and the model multiplies it by the other factors to get the predicted number of claims. We implemented it in H2O and I have tried it often, but I have yet to see it be useful for this tactic, though it makes a lot of sense to try.
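The insurance example can be made concrete with a little arithmetic. With a log link, adding log(exposure) as an offset is the same as multiplying the per-policy rate by the policyholder count; the numbers below are made up for illustration:

```python
import numpy as np

# Hypothetical insurance-style inputs: policyholder counts (exposure)
# and a linear predictor eta from the model's other factors.
exposure = np.array([100.0, 250.0, 40.0])
eta = np.array([-4.2, -3.9, -4.5])

# A Poisson model with offset log(exposure) predicts
#   E[claims] = exp(log(exposure) + eta)
# The offset enters with a coefficient fixed at 1, so it simply
# scales the per-policy claim rate exp(eta) by the exposure.
pred_with_offset = np.exp(np.log(exposure) + eta)
pred_scaled = exposure * np.exp(eta)
print(np.allclose(pred_with_offset, pred_scaled))  # True
```

xgboost's base_margin plays the same role, except it is an additive shift on the raw-score scale rather than a column with a fixed coefficient.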
So... when you use it, you put a prediction vector from some other model into the base_margin parameter? That is, you pass both label and base_margin?
Edit: I don't think I have that right with respect to steps 2 and 3. I would have thought step 2 would be a full model, and step 3 a new model where label is the real target and base_margin is the output of the model from step 2. How does ExtraTrees (assuming the scikit-learn version) fit into step 2?
Building on the other thread's overview of features, here is my experience with features so far. Most of these may be covered in your models, but perhaps there is something new here?
This is the importance plot from xgb in R. As one might imagine, the precipitation estimate based on reflectivity (Ref) provides the most gain.
UPDATED 11/6
Here are the feature definitions (R data.table syntax, one per line):
bigflag = mean(bigflag, na.rm = T)
precip = sum(timespans * rate, na.rm = T)
precipC = sum(timespans * rateC, na.rm = T)
ratemax = max(rate, na.rm = T)
ratesd = sd(rate, na.rm = T)
rd = mean(radardist_km, na.rm = T)
rdxref = mean(radardist_km * Ref, na.rm = T)
rdxrefc = mean(radardist_km * RefComposite, na.rm = T)
records = .N
ref1 = mean(Ref_5x5_10th, na.rm = T)
ref1sq = mean(Ref_5x5_10th^2, na.rm = T)
ref5 = mean(Ref_5x5_50th, na.rm = T)
ref5sq = mean(Ref_5x5_50th^2, na.rm = T)
ref9sq = mean(Ref_5x5_90th^2, na.rm = T)
refc1sq = mean(RefComposite_5x5_10th^2, na.rm = T)
refc9sq = mean(RefComposite_5x5_90th^2, na.rm = T)
refcdivrd = mean(RefComposite / radardist_km, na.rm = T)
refcratio2 = mean((RefComposite_5x5_90th - RefComposite_5x5_10th) / RefComposite, na.rm = T)
refcsd = sd(RefComposite, na.rm = T)
refdiff = mean(Ref_5x5_50th - Ref, na.rm = T)
refdivrd = mean(Ref / radardist_km, na.rm = T)
refmissratio = sum(is.na(Ref)) / .N
refratio2 = mean((Ref_5x5_90th - Ref_5x5_10th) / Ref, na.rm = T)
refsd = sd(Ref, na.rm = T)
target = log1p(mean(Expected, na.rm = T))
wref = mean(timespans * Ref, na.rm = T)
wrefc = mean(timespans * RefComposite, na.rm = T)
zdr5 = mean(Zdr_5x5_50th, na.rm = T)
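For anyone working in Python, a few of the aggregations above translate directly to pandas groupby operations. The tiny DataFrame below is synthetic stand-in data, not the competition data; pandas skips NaNs by default, matching na.rm = TRUE:

```python
import numpy as np
import pandas as pd

# Tiny synthetic stand-in for the per-Id radar readings (the real data
# has columns like Ref, radardist_km, Expected, grouped by Id).
df = pd.DataFrame({
    "Id": [1, 1, 1, 2, 2],
    "Ref": [10.0, np.nan, 14.0, 20.0, 22.0],
    "radardist_km": [5.0, 5.0, 5.0, 9.0, 9.0],
    "Expected": [1.2, 1.2, 1.2, 3.4, 3.4],
})

g = df.groupby("Id")
feats = pd.DataFrame({
    "records": g.size(),                                  # .N
    "refsd": g["Ref"].std(),                              # sd(Ref)
    "refdivrd": (df["Ref"] / df["radardist_km"])          # mean(Ref / radardist_km)
                .groupby(df["Id"]).mean(),
    "refmissratio": df["Ref"].isna()                      # sum(is.na(Ref)) / .N
                    .groupby(df["Id"]).mean(),
    "target": np.log1p(g["Expected"].mean()),             # log1p(mean(Expected))
})
print(feats)
```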