From our paper Do little interactions get lost in dark random forests?
The basic idea of the permutation variable importance approach [18] is to consider a variable important if it has a positive effect on the prediction performance. To evaluate this, a tree is grown in the first step, and the prediction accuracy in the OOB observations is calculated. In the second step, any association between the variable of interest x_i and the outcome is broken by permuting the values of all individuals for x_i, and the prediction accuracy is computed again. The difference between the two accuracy values is the permutation importance for x_i from a single tree. The average of all tree importance values in a random forest then gives the random forest permutation importance of this variable. The procedure is repeated for all variables of interest. The package ranger [24] was used in our analyses.
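As a minimal sketch of that procedure in ranger (using the built-in iris data purely as a stand-in for a real dataset):

```r
library(ranger)

# Grow a forest and request (unscaled) permutation importance: for each tree,
# OOB prediction accuracy is compared before and after permuting each variable,
# and the per-tree differences are averaged over all trees.
rf <- ranger(Species ~ ., data = iris,
             importance = "permutation", num.trees = 500, seed = 42)

sort(rf$variable.importance, decreasing = TRUE)
```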
I'm getting a little hung up on the nomenclature used in this CV post. Is the permutation importance method different from a "conditional Random Forest"?
Yes, in a conditional inference forest, a permutation test is used for splitting, while the permutation importance uses permutations to calculate variable importance.
Thank you for the clarification. Is it possible to incorporate this splitting rule within ranger::train()? So far, I've only found the partykit::cforest() function for performing conditional inference forests, but my workflow is dependent on the efficiency of ranger given the scale of my data.
If you have a regression, survival or binary classification problem, you could use splitrule = "maxstat" for splitting with maximally selected rank statistics, which is an approximation of the conditional inference splitting; see our paper Unbiased split variable selection for random survival forests using maximally selected rank statistics for details. Ask me for the PDF if you don't have access.
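To make that concrete, a minimal sketch (mtcars is just a placeholder regression problem; alpha and minprop are shown at their ranger defaults):

```r
library(ranger)

rf <- ranger(mpg ~ ., data = mtcars,
             splitrule = "maxstat",  # maximally selected rank statistics
             alpha = 0.5,            # p-value threshold for a split (default)
             minprop = 0.1)          # minimal child-node proportion (default)
```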
Thanks for the info! I'm able to access your paper as well.
@mnwright
Strobl et al. [68] show that the default option (in ranger) of sampling with replacement leads to a Variable Importance Measure bias in favor of predictors with many categories even if the trees are built using an unbiased criterion.
Two questions:

1. Does this bias persist even when using the "maxstat" split rule, and thus approximating conditional inference forests, which do not suffer from the bias of selecting predictors with many unique values?
2. Do you have any guidance on whether to sample with or without replacement when using the "maxstat" split rule? I'm looking to cite evidence in favor of the default method in most RF implementations (with replacement), but have been unable to find anything. In the party package, conditional inference forests use a without-replacement sample of size 0.632 * n. Do you think this is a good default value to use for maxstat trees?
Citation: C. Strobl, A. L. Boulesteix, A. Zeileis, and T. Hothorn. Bias in random forest variable importance measures: Illustrations, sources and a solution. BMC Bioinformatics, 8:25, 2007.
Yes, this "bootstrapping bias" is also apparent for the "maxstat" splitting. See Fig. 4 in our Stat Med paper*.

If prediction accuracy is your major concern, there is not much of a difference in my experience. See also Figs. 4 and 5 of our Stat Med paper (preprint linked below). If any kind of model interpretation is your goal, subsampling might be the better choice (to avoid this kind of bias).

The choice of 0.632 * n (i.e., (1 - exp(-1)) * n) is quite common because this is the expected number of unique observations when sampling n times with replacement. It is a reasonable default but might be improved by tuning. See also Section 2.1.2 of our recent paper on hyperparameters** and Philipp et al.'s paper on tunability.
*Free preprint: https://arxiv.org/pdf/1605.03391
**Free preprint: https://arxiv.org/pdf/1804.03515
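For reference, a sketch of the subsampling setup discussed above (the 0.632 value is a reasonable starting point rather than a tuned constant):

```r
library(ranger)

# Without-replacement subsampling at the "bootstrap-equivalent" rate:
# 1 - exp(-1) ~= 0.632 is the expected fraction of unique observations
# in a with-replacement sample of size n.
rf <- ranger(mpg ~ ., data = mtcars,
             replace = FALSE,
             sample.fraction = 1 - exp(-1))
```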
If you have a regression, survival or binary classification problem, you could use splitrule = "maxstat" for splitting with maximally selected rank statistics, which is an approximation of the conditional inference splitting, see our paper Unbiased split variable selection for random survival forests using maximally selected rank statistics for details. Ask me for the PDF if you don't have access.
In ranger 0.11.2, I'm getting "Error: maxstat splitrule applicable to regression or survival data only" when I try to use splitrule = "maxstat" for binary classification. Am I missing something?
You'll have to grow a regression forest on 0/1 outcomes. For two classes, this is equivalent to classification with probability = TRUE. The maxstat split rule works in this way, but it's not well tested.
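A sketch of that workaround, assuming a two-class problem (here a subset of iris recoded to 0/1), with the "not well tested" caveat above kept in mind:

```r
library(ranger)

df <- iris[iris$Species != "virginica", ]
df$y <- as.numeric(df$Species == "setosa")  # recode the two-level factor to 0/1
df$Species <- NULL

# Regression forest on the 0/1 outcome; leaf means act as class probabilities.
rf <- ranger(y ~ ., data = df, splitrule = "maxstat")
head(predict(rf, df)$predictions)
```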
A follow-up question: I was trying to find the answer in the code, but I got lost between the trees. How is permutation feature importance calculated in the case of a regression task, i.e., the decrease in which measure is used instead of accuracy (by default)?
It's the mean squared error.
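In other words, the per-tree score is MSE on permuted data minus MSE on the original data. A back-of-the-envelope illustration of that quantity (ranger computes it per tree on each tree's OOB observations; this sketch uses whole-forest, in-sample predictions purely to show the direction of the computation):

```r
library(ranger)

rf   <- ranger(mpg ~ ., data = mtcars, num.trees = 500, seed = 1)
mse0 <- mean((predict(rf, mtcars)$predictions - mtcars$mpg)^2)

perm     <- mtcars
perm$wt  <- sample(perm$wt)  # break the association for one variable
mse_perm <- mean((predict(rf, perm)$predictions - mtcars$mpg)^2)

mse_perm - mse0  # increase in MSE: an importance-style score for wt
```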
I found my way to this issue while trying to build a fast, unbiased random forest on a binary outcome hoping, eventually, to do some model inference using a variable importance metric that is robust to correlations among predictors. (For posterity, the Sobol-MDA approach to deal with correlated predictors led me to this other {ranger} issue, which led me to "conditional predictive impact" and the {cpi} package). I learned a lot! Thanks for all the links and suggestions.
In trying to implement this, I am hoping to account for class imbalance too. When I switch from a classification problem on the binary outcome to a regression problem on a binary outcome (to effectively grow a probability-like random forest), I lose the ability to include class.weights in the model building. I've seen some implementations where the case.weights argument is used to stand in for class.weights by assigning each observation a weight corresponding to the inverse of its class frequency (1 / table(data$target)). That does seem to improve model performance, but some trial-and-error leads me to believe that this might be doing something that I don't intend (see https://github.com/BlasBenito/spatialRF/issues/12).
Is there a way that you would recommend accounting for class imbalance in a 0/1 outcome regression problem? Particularly a way that can be implemented in {mlr3} so that I can use {cpi}?
I think you are looking for probability trees/forests. In ranger, define it as a classification outcome (i.e. a factor) and set probability = TRUE in ranger(). In mlr3, just use a classification task and set predict_type = "prob" in the lrn() call (there is an example in the cpi package).
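A minimal sketch of that mlr3 setup (assuming a data.frame df with a factor target y; the cpi package has a fuller example):

```r
library(mlr3)
library(mlr3learners)  # provides the ranger learner

task    <- as_task_classif(df, target = "y")
learner <- lrn("classif.ranger", predict_type = "prob")  # probability forest
learner$train(task)
```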
Thanks so much for your reply! You are right that I am essentially looking for probability trees, but I learned from this issue that I can only use the splitrule = "maxstat" approach on regression problems, and also that I could use regression trees on a 0/1 outcome to mimic a probability tree.
You'll have to grow a regression forest on 0/1 outcomes. For two classes, this is equivalent to classification with probability = TRUE. The maxstat split rule works in this way, but it's not well tested.
The ultimate goal is to try to get unbiased variable selection, since I'm interested in the inference about important features, which is what led me to conditional inference trees, and then to your Wright et al., 2017 paper.
So is there an appropriate way to mimic class.weights from classification/probability trees if I'm constrained to building a regression random forest (so that I can use splitrule = "maxstat")?
For posterity, I did learn that I can implement the case.weights approach in {mlr3} by using the set_col_roles() method on the task. First, I add a column to my data.frame that represents the weight for each row, then set the role for that column to be "weight": task$set_col_roles(cols = "wt", roles = "weight")
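Putting those pieces together, a sketch (df and its target column are placeholders; the "wt" column name is arbitrary):

```r
library(mlr3)

# Inverse-frequency case weights: rows from the rarer class get larger weights.
df$wt <- as.numeric(1 / table(df$target)[df$target])

task <- as_task_classif(df, target = "target")
task$set_col_roles(cols = "wt", roles = "weight")  # used as weights, not a feature
```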
No, class.weights and maxstat splitting won't work together, because the class weights directly change the Gini index splitting rule; see e.g. https://github.com/imbs-hl/ranger/blob/5f71872d7b552fd2cf652daab92416f52976df86/src/TreeProbability.cpp#L290.

However, you can use case.weights, because those only change the bootstrapping/subsampling of observations for the trees.
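So a class-imbalance-aware maxstat regression forest might look like this sketch (continuing the hypothetical 0/1 outcome df from above):

```r
library(ranger)

# case.weights only changes how rows are drawn for each tree's
# bootstrap/subsample, so it is compatible with splitrule = "maxstat".
w  <- as.numeric(1 / table(df$y)[as.character(df$y)])
rf <- ranger(y ~ ., data = df, splitrule = "maxstat", case.weights = w)
```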
And again, be careful with splitrule = "maxstat" on 0/1 data. Doing rank statistics with only ties sounds wrong... If you (or anyone else) wants to do some testing, please share!
This is so helpful; thank you very much for your time!
And thanks also for your caveat about splitrule = "maxstat" on 0/1 data. I can likely provide some testing, though I think the supplemental material of Nembrini (2018) might do a better job than whatever I might have done!

You've likely seen that work, but for posterity: they performed simulations to identify bias in variable importance metrics arising from bias in the splitting rule. They use 0/1 outcomes to investigate this in the classification scenario, and use splitrule = "maxstat" with alpha = 1 and minprop = 0 in a {ranger} regression to perform the "classification".
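For reference, that setup as a ranger call (a sketch of the settings described above, not the paper's exact code; df and y are placeholders):

```r
library(ranger)

# alpha = 1 disables the p-value stopping criterion and minprop = 0 removes
# the child-node size restriction, per the Nembrini (2018) simulations.
rf <- ranger(y ~ ., data = df, splitrule = "maxstat", alpha = 1, minprop = 0)
```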
From my read, the results for the 0/1 outcome {ranger} regression using splitrule = "maxstat" are very encouraging! The approach to mimic classification (i.e., a "maxstat" split rule with regression on a 0/1 outcome) shows behavior similar to regression on a continuous target in terms of avoiding variable selection bias trickling down to the variable importance metrics.
Is there different testing that would make us feel more confident of this?
Yes, that looks good! I would also check the prediction performance, but I'm sure you are already doing that. @snembrini maybe this is interesting for you?
Thanks @mnwright for looping me in. We tried maxstat with binary outcomes only for completeness, but we never used it for prediction, for instance.

Now, I would like to clarify and demystify a couple of things.
This is great; thanks so much for providing some of this intuition here.
I'm convinced about using the computational power of {ranger} with a maxstat split rule.
I think the only outstanding question in my mind is about dealing with correlated predictors. That seems to be a factor that also plays into which variable importance measure you might pick. In a different issue, it was you who pointed me to CPI for exactly this use case.

Understanding that there is no such thing as totally "unbiased" variable selection, it sounds like more research is needed before using the impurity_corrected importance metric (implemented in {ranger} and detailed in your 2018 paper) with correlated predictors? But my understanding is also that a Gini impurity-based feature importance is likely to be sensitive to correlated predictors.
Unbiasedness is unachievable for recursive partitioning when you have predictors of different types.

Correlation is problematic in general within any statistical framework: CPI would give a higher ranking to uncorrelated predictors, which I think makes sense for most problems, but variable importance is a vague concept and it really depends on what you want to get from the data.

Correlated variables compete and share some of the same information, so for prediction I'd rather pick predictors that are not correlated, but that's a personal choice.

The total reduction in heterogeneity is fine because it measures exactly that; the problem is when you use it on a forest that splits to purity. Splitting criteria, variable importance, and prediction are three different beasts that have different goals.

"Unbiased" in this case is a misnomer used in machine learning, because there is no parameter.

The variable importance we presented in 2018 has the purpose of mimicking the permutation importance, which can be extremely slow for some datasets (even days), and can be computed at basically no cost when the forest is generated.
Hope this helps
Hi,
I am quite new to running RF and to using ranger. I have ~3000 SNPs, 265 individuals, and a quantitative phenotype. My aim is to use RF (ranger) to rank the SNPs by how much each one explains the phenotype variation in the population (inter-individual variation).

Anyway, I have read that "The most reliable measurement is the (unscaled) permutation importance" (Szymczak et al., 2016). It refers to the most reliable measurement for assessing the predictive power of each variable (or SNP), which is what I am after. Therefore, I have to use importance = "permutation", which is the only way to compute permutation importance (right?). But I don't understand well what is being permuted. I suppose the samples(?) Could someone explain in a bit more detail how permutation in random forests works? I really would like to understand!
Thanks in advance for the help,
Regards,
Angela Parody-Merino