This PR removes the `feature_indices` argument from the `.fit()` method of `DecisionTree` and instead implements `max_features`: at each split, a random subset of `max_features` features is chosen as candidates.
I profiled this and found that the `__get_feature_indices()` helper I added to support `max_features` accounts for about 2 ms of a roughly 7500 ms run, so its overhead is negligible and should not be a concern.
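For reviewers, here is a minimal sketch of what the per-split feature sampling looks like. This is a hypothetical standalone version of the private `__get_feature_indices()` helper; the function name is taken from the PR text, but the exact signature, the `rng` parameter, and the fallback behavior when `max_features` is unset are assumptions, not the actual implementation.

```python
import random

def get_feature_indices(n_features, max_features, rng):
    # Hypothetical sketch of the __get_feature_indices helper:
    # return the indices of the features considered at one split.
    # If max_features is unset or covers all features, use every feature
    # (assumed fallback behavior, not confirmed by the PR).
    if max_features is None or max_features >= n_features:
        return list(range(n_features))
    # Sample max_features distinct feature indices without replacement.
    return rng.sample(range(n_features), max_features)

# Example: pick 3 of 10 features for a single split.
rng = random.Random(0)
subset = get_feature_indices(n_features=10, max_features=3, rng=rng)
# subset holds 3 distinct indices in [0, 10)
```

Because the helper only draws a small sample per split, it is consistent with the profiling result above: the cost is a constant-time sampling step per split, independent of the dataset size.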