Allow `RandomForest*` and `ExtraTrees*` to have a higher max_samples than 1.0 when `bootstrap=True` #28507

adam2392 commented 8 months ago

### Describe the workflow you want to enable

Currently, random/extra forests can bootstrap-sample the data with `max_samples` in (0.0, 1.0]. This enables an out-of-bag (oob) estimate in forests.

However, this caps the expected fraction of unique in-bag samples at roughly 63% (the limit of `1 - e^(-1)` when `max_samples=1.0`), leaving about 37% of unique samples for out-of-bag estimation. You should be able to push this proportion higher. For instance, perhaps I want to leverage 80% of my data to fit each tree and 20% to estimate oob performance; that requires setting `max_samples=1.6`.
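For concreteness, here is the arithmetic behind those numbers (a quick back-of-the-envelope check; the variable names are mine):

```python
import numpy as np

# Drawing m * n samples with replacement from n points leaves an expected
# fraction of unique (in-bag) points of 1 - (1 - 1/n)^(m*n) ~= 1 - e^(-m).
m = 1.0
print(1 - np.exp(-m))   # ~0.632: the classic ~63% in-bag ceiling

# Solving 1 - e^(-m) = p for a target in-bag fraction p gives m = -ln(1 - p).
p = 0.8
print(-np.log(1 - p))   # ~1.609, i.e. max_samples ~= 1.6 for an 80/20 split
```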

Beyond that, no paper suggests that 63% is a required cutoff for bootstrapping the samples in random/extra forests. I am happy to submit a PR if the core-dev team thinks the proposed solution is simple and reasonable.

See https://stats.stackexchange.com/questions/126107/expected-proportion-of-the-sample-when-bootstrapping for a good reference and explanation.

### Describe your proposed solution

The proposed solution is backward-compatible and adds minimal complexity to the codebase.

  1. We change https://github.com/scikit-learn/scikit-learn/blob/38c8cc3bab151b76ed890a4b690871e0fa404426/sklearn/ensemble/_forest.py#L95-L125 to the following:

```python
from numbers import Integral, Real

import numpy as np


def _get_n_samples_bootstrap(n_samples, max_samples):
    """Get the number of samples in a bootstrap sample.

    The expected total number of unique samples in a bootstrap sample is
    required to be at most ``n_samples - 1``.
    This is equivalent to the expected number of out-of-bag samples being at
    least 1.

    Parameters
    ----------
    n_samples : int
        Number of samples in the dataset.
    max_samples : int or float
        The maximum number of samples to draw from the total available:
            - if float, this indicates a fraction of the total;
            - if int, this indicates the exact number of samples;
            - if None, this indicates the total number of samples.

    Returns
    -------
    n_samples_bootstrap : int
        The total number of samples to draw for the bootstrap sample.
    """
    if max_samples is None:
        return n_samples

    if isinstance(max_samples, Integral):
        # Expected number of unique in-bag samples when drawing
        # ``max_samples`` times with replacement from ``n_samples`` points.
        expected_unique_samples = (1 - np.exp(-max_samples / n_samples)) * n_samples
        if expected_unique_samples >= n_samples - 1:
            raise ValueError(
                "The expected number of unique samples in the bootstrap sample"
                f" must be at most {n_samples - 1}. It is: {expected_unique_samples}"
            )
        return max_samples

    if isinstance(max_samples, Real):
        # Here ``max_samples`` is a fraction (possibly > 1.0) of ``n_samples``.
        expected_unique_samples = (1 - np.exp(-max_samples)) * n_samples
        if expected_unique_samples >= n_samples - 1:
            raise ValueError(
                "The expected number of unique samples in the bootstrap sample"
                f" must be at most {n_samples - 1}. It is: {expected_unique_samples}"
            )
        return max(round(n_samples * max_samples), 1)
```
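
Under this proposal, usage would look like the following (hypothetical until the change lands; released versions of scikit-learn reject float `max_samples > 1.0`):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, random_state=0)

# Each tree draws 1.6 * n_samples bootstrap samples, so in expectation
# ~80% of the data is in-bag and ~20% remains for the oob estimate.
clf = RandomForestClassifier(
    n_estimators=100,
    bootstrap=True,
    oob_score=True,
    max_samples=1.6,  # would raise InvalidParameterError today
    random_state=0,
)
clf.fit(X, y)
print(clf.oob_score_)
```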


Note that we probably want some reasonable control over how large `max_samples` can be relative to `n_samples`. For instance, `max_samples = 10 * n_samples` would result in nearly every unique sample being in-bag for every tree, leaving almost no samples for the oob computation. A reasonable cutoff is to always require at least 1 expected oob sample:

- if `max_samples` is an integer, then we require `(1 - e^(-max_samples/n_samples)) * n_samples < n_samples - 1`;
- if `max_samples` is a float, then we require `(1 - e^(-max_samples)) * n_samples < n_samples - 1` (i.e. at least 1 sample is expected to be out of bag).

Alternatively, we can impose a stricter heuristic of at least 5 expected oob samples. Either way, this works for most use cases, because people typically want to raise the in-bag percentage from 63% to, say, 80% or 90% at most, not 99.99%.
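
As a sanity check on where these cutoffs land (using the float form of the constraint above; `n_samples = 1000` is just an example):

```python
import numpy as np

n_samples = 1000

# The expected oob count is n * e^(-max_samples). Requiring at least 1
# expected oob sample means max_samples <= ln(n); requiring at least 5
# means max_samples <= ln(n / 5).
print(np.log(n_samples))      # ~6.91: cutoff for >= 1 expected oob sample
print(np.log(n_samples / 5))  # ~5.30: cutoff for >= 5 expected oob samples

# Both are far above the ~1.6 needed for an 80/20 in-bag/oob split.
```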

### Describe alternatives you've considered, if relevant

There is no way to enable this functionality today without forking the code.

### Additional context

This adds flexibility to how individual trees are trained and may help support other issues that require more fine-grained control over what is in-bag vs. oob, such as #19710.

glemaitre commented 7 months ago

I don't know if this is YAGNI or not. By definition, having `max_feature > 1` and `bootstrap=True` would not be a bootstrap anymore. Is there any concrete example where not having this feature is detrimental?

> This allows flexibility in terms of the trees and may help in supporting other issues that require more fine-grained control over what is in-bag vs oob such as https://github.com/scikit-learn/scikit-learn/issues/19710.

Could you provide a bit more background? It might be interesting context to consider for a decision.

adam2392 commented 7 months ago

> I don't know if this is YAGNI or not. By definition, having `max_feature > 1` and `bootstrap=True` would not be a bootstrap anymore. Is there any concrete example where not having this feature is detrimental?

I assume you meant `max_samples` here(?)

Bootstrap is defined as sampling *with* replacement. Let M be the number of bootstrap samples and N the number of samples in your dataset. It just happens that when M == N, you get ~63% unique samples in your bootstrap set in expectation. However, you can sample M > N, and there is no fundamental reason that wouldn't be considered a bootstrap sample.
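
A quick simulation (my own check, not from the original discussion) confirms both the ~63% figure at M == N and the ~80% figure at M == 1.6 * N:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

for ratio in (1.0, 1.6):
    m = int(ratio * n)
    draws = rng.integers(0, n, size=m)  # bootstrap: sample with replacement
    unique_frac = np.unique(draws).size / n
    print(f"M = {ratio} * N -> {unique_frac:.1%} unique in-bag samples")
# prints ~63.2% and ~79.8%, matching 1 - e^(-M/N)
```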

> Could you provide a bit more background? It might be interesting context to consider for a decision.

Sure! My collaborators and I commonly do research on random forests because of their desirable theoretical guarantees. One of the interesting recent advances has to do with what we can do with out-of-bag samples. The number of out-of-bag samples is inversely related to the number of in-bag samples, and right now sklearn constrains the trade-off you're allowed to make here.

More concretely, we are interested in hypothesis testing with random forests, using a test statistic estimated on out-of-bag samples. It is possible to leverage out-of-bag samples to estimate a test statistic per tree and average across trees. However, there is a trade-off between how good your estimate of the test statistic is (number of out-of-bag samples) and how well your tree is fit (number of in-bag samples). By default, sklearn effectively lower-bounds the out-of-bag fraction at about 1.0 - 0.63 = 0.37. However, we want to estimate out-of-bag statistics on 20% of the data, not 37%. This requires bootstrap sampling M = 1.6 * N times, which in expectation yields out-of-bag estimates on 20% of the data and in-bag training of each tree on 80% of the data.
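
For illustration, this is roughly how we compute per-tree oob statistics today with `max_samples <= 1.0`. It relies on scikit-learn's private `_generate_unsampled_indices` and `_get_n_samples_bootstrap` helpers (private APIs that may change between versions), and per-tree accuracy stands in for our actual test statistic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble._forest import (
    _generate_unsampled_indices,
    _get_n_samples_bootstrap,
)

X, y = make_classification(n_samples=1000, random_state=0)
rf = RandomForestClassifier(n_estimators=50, bootstrap=True, random_state=0)
rf.fit(X, y)

n = X.shape[0]
n_samples_bootstrap = _get_n_samples_bootstrap(n, rf.max_samples)

# Estimate the statistic on each tree's oob samples, then average over trees.
scores = []
for tree in rf.estimators_:
    oob_idx = _generate_unsampled_indices(tree.random_state, n, n_samples_bootstrap)
    scores.append(tree.score(X[oob_idx], y[oob_idx]))
print(np.mean(scores))
```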

Note that we're setting `bootstrap=True`, so there is no way to hack around this with, say, a `StratifiedShuffleSplit` CV object, because that would give each forest (not each tree) different in-bag/out-of-bag data. We want different in-bag/out-of-bag data per tree, so the easiest fix is a simple change in sklearn. I've sketched out that only a few lines need to change (mostly for error checking).

Let me know if I can elaborate further!

adam2392 commented 4 months ago

To add to the motivation and use cases described above: the typical 80/20 train/test split used in machine learning is quite sensible, and the train/test split used in cross-validation is analogous to the in-bag/oob split. However, the in-bag/oob split can be at most 63/37 because `max_samples` is constrained to at most 1.0.

It's a simple logic change, it is backward-compatible, and it allows more flexibility in trading off oob estimation against tree fitting.