scikit-learn / scikit-learn

scikit-learn: machine learning in Python
https://scikit-learn.org
BSD 3-Clause "New" or "Revised" License

DecisionTreeClassifier has random behaviour with splitter="best"? #2386

Closed · ndawe closed this issue 11 years ago

ndawe commented 11 years ago

See below:

from sklearn.datasets import make_classification
from sklearn.grid_search import GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.cross_validation import KFold

X, y = make_classification(random_state=0)

est_parameters = {
    "max_depth": range(2, 7)}

def best_est(X, y):
    cv = KFold(y.shape[0], n_folds=2, random_state=0)
    # default tree uses gini and best split
    est = DecisionTreeClassifier()
    grid_search = GridSearchCV(est, est_parameters, cv=cv)
    grid_search.fit(X, y)
    return grid_search.best_score_, grid_search.best_params_

for i in xrange(10):
    print(best_est(X, y))
    # why is the best tree different each time
    # unless I set the random_state in DecisionTreeClassifier() ?

output:

(0.71999999999999997, {'max_depth': 3})
(0.68000000000000005, {'max_depth': 3})
(0.73999999999999999, {'max_depth': 6})
(0.70999999999999996, {'max_depth': 3})
(0.70999999999999996, {'max_depth': 3})
(0.70999999999999996, {'max_depth': 3})
(0.72999999999999998, {'max_depth': 3})
(0.69999999999999996, {'max_depth': 6})
(0.69999999999999996, {'max_depth': 3})
(0.70999999999999996, {'max_depth': 5})

ping @glouppe

arjoly commented 11 years ago

During the search for the best split, the features are shuffled at each node, so the order in which they are evaluated can change from one run to the next. If several splits are tied on impurity improvement, the chosen split may then differ between runs unless the random state is set.
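
As a minimal sketch of the effect (not from the original thread): if you duplicate the columns of X, every candidate split has an exact duplicate with the same impurity decrease, so the root split can land on either copy from run to run:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(random_state=0)
# duplicate every column so the best feature always has an exactly tied twin
X_tied = np.hstack([X, X])

roots = set()
for _ in range(20):
    tree = DecisionTreeClassifier().fit(X_tied, y)
    # tree_.feature[0] is the feature index used at the root split
    roots.add(tree.tree_.feature[0])
print(roots)  # typically contains both a column index and its duplicate's index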

If I set the random_state to 0, I get:

(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
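
For reference, the only change to the snippet above should be the estimator construction, e.g.:

est = DecisionTreeClassifier(random_state=0)

GridSearchCV clones the estimator for every fit, and the clone keeps the random_state parameter, so the seed applies in every fold.
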
ndawe commented 11 years ago

Yeah, I get deterministic behaviour too (and the same output as you) when setting the random state. I am a bit surprised to see any random behaviour at all in the default DecisionTreeClassifier, but tied variables are a good point; that could well be what is happening here. Maybe the variables should not be shuffled? Any thoughts on that?

ndawe commented 11 years ago

Hmm... or maybe shuffling the variables at each node is actually a nice feature, and I am just used to a more deterministic implementation elsewhere. If this really is where the random behaviour is manifesting itself, then I think I'm fine with it. I suppose shuffling the features reduces the greediness of the algorithm somewhat.

pprett commented 11 years ago

I think Arnaud is right: the order in which variables are searched is random. We could also use a different tie-breaking rule; instead of selecting the first variable encountered in case of a tie, we could select the one with the lowest feature id...
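
For illustration only (toy numbers, not the actual Cython splitter): with such a rule, the winner of an exact tie no longer depends on the shuffle order.

import random

gains = {0: 0.5, 1: 0.5, 2: 0.3}   # toy impurity improvements; features 0 and 1 tie
order = list(gains)
random.shuffle(order)              # stands in for the per-node feature shuffle

best_gain, best_feature = float("-inf"), None
for f in order:
    g = gains[f]
    # proposed rule: on an exact tie, keep the lowest feature id
    if g > best_gain or (g == best_gain and f < best_feature):
        best_gain, best_feature = g, f
print(best_feature)                # always 0, whatever the shuffle order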

glouppe commented 11 years ago

Yes, Arnaud is right: this comes from ties between the features to split on (which happens more often than one might think).

ndawe commented 11 years ago

OK, thanks for the confirmation! I'm happy with the feature shuffling. Closing this issue.