pandas-dev / pandas

Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
https://pandas.pydata.org
BSD 3-Clause "New" or "Revised" License

BUG: pd.cut precision doesn't work as expected #51532

Open Gabriel-Kissin opened 1 year ago

Gabriel-Kissin commented 1 year ago


Reproducible Example

import pandas as pd
import numpy as np
print(pd.__version__)

interval_testing = pd.DataFrame(columns=['data', 'interval', 'data_in_interval'])

# 100 evenly spaced points, offset slightly so values land just past bin edges
interval_testing.data = np.linspace(0, 1, 100) + 0.000499

# cut into 13 equal-width bins, displaying edges at 2 significant figures
interval_testing.interval = pd.cut(interval_testing.data, bins=13, precision=2)
# interval_testing.interval = pd.qcut(interval_testing.data, q=13, precision=2)

# check each value against the interval it was assigned to
interval_testing.data_in_interval = [interval_testing.data[i] in interval_testing.interval[i]
                                     for i in range(len(interval_testing))]

# rows where the value does NOT lie inside its assigned interval
interval_testing.loc[interval_testing.data_in_interval == False]

Issue Description

Intro: pd.cut splits the data into bins. It has a precision parameter which controls the precision of the displayed bin edges. E.g. if precision=2 then bins will be something like (0.02, 0.04] or (0.014, 0.028] (precision counts significant figures, not decimal places).
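
For example (a minimal sketch, not from the original report; the exact edges depend on pandas' internal padding of the data range):

import numpy as np
import pandas as pd

x = np.linspace(0, 1, 100)
# 3 equal-width bins; edges are displayed at 2 significant figures
print(pd.cut(x, bins=3, precision=2).categories)
# e.g. IntervalIndex([(-0.001, 0.33], (0.33, 0.67], (0.67, 1.0]], ...)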

I had expected that (1) the bin edges would be rounded first, and only then (2) would the data be binned into the rounded bins, so that all data is binned correctly.

However, what it actually seems to do is (1) bin the data, and THEN (2) round the bin edges for display. The obvious problem with this is that some datapoints end up assigned to bins they don't fit into.

The output of the MRE code above shows this: several rows have data_in_interval == False (screenshot in the original issue). For example, 0.152014 is assigned the displayed interval (0.077, 0.15] even though it is greater than 0.15. If in the MRE we set precision=4, everything is binned correctly for this particular dataset.

NOTE 1: The same problem exists with pd.qcut, which cuts the data into buckets based on data quantiles. There it could be argued that the current behaviour is desirable, because it keeps the correct proportion of data in each bucket: e.g. when cutting at the quartiles, the current behaviour puts exactly 25% of the data into each bucket, whereas rounding the edges first can put slightly more or less into each bucket. However, that argument is much weaker for pd.cut. And in any case, I think that binning data correctly should always be the primary consideration, with bucket sizes secondary to that. The trade-off can be seen in the sketch below.
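
A minimal sketch of that trade-off, using hypothetical normally-distributed data rather than the data from the report: rounding the quartile edges before binning keeps every point inside its bin, but the buckets are then only approximately equal-sized.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
data = pd.Series(rng.normal(size=1000))

# exact quartile edges, then rounded to 1 decimal place
edges = np.quantile(data, [0, 0.25, 0.5, 0.75, 1])
rounded = np.round(edges, 1)
rounded[0] = np.floor(edges[0] * 10) / 10   # widen the outer edges so the
rounded[-1] = np.ceil(edges[-1] * 10) / 10  # min and max stay inside the bins

# exactly 250 per bucket:
print(pd.qcut(data, q=4).value_counts())
# close to, but not exactly, 250 per bucket:
print(pd.cut(data, bins=rounded).value_counts())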

NOTE 2: the pandas docs state

precision : int, default 3
    The precision at which to store and display the bins labels.

which implies it acts as it does. However, the docs could at least be clearer on this point: most users won't expect incorrectly binned data, and since a precision of 3 is used by default, even users who haven't specified precision at all can get incorrect results.

Expected Behavior

Expected behaviour would be to put e.g. 0.152014 in the bin (0.15, 0.23], not in (0.077, 0.15]. I.e. round the bin edges first, then do the binning with the rounded edges.
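
A minimal sketch of this round-first idea (an illustration only, not pandas internals), reusing the MRE data:

import numpy as np
import pandas as pd

data = np.linspace(0, 1, 100) + 0.000499

# compute raw equal-width edges, round them, then bin with the rounded edges
raw_edges = np.linspace(data.min(), data.max(), 13 + 1)
edges = np.round(raw_edges, 2)
edges[0] = np.floor(raw_edges[0] * 100) / 100   # widen the outer edges so the
edges[-1] = np.ceil(raw_edges[-1] * 100) / 100  # min and max stay inside

binned = pd.cut(data, bins=edges)
# every value now lies inside its assigned interval
assert all(x in iv for x, iv in zip(data, binned))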

Installed Versions

INSTALLED VERSIONS
------------------
commit            : 2cb96529396d93b46abab7bbc73a208e708c642e
python            : 3.9.4.final.0
python-bits       : 64
OS                : Windows
OS-release        : 10
Version           : 10.0.19041
machine           : AMD64
processor         : Intel64 Family 6 Model 126 Stepping 5, GenuineIntel
byteorder         : little
LC_ALL            : None
LANG              : None
LOCALE            : English_United Kingdom.1252
pandas            : 1.2.4
numpy             : 1.23.1
pytz              : 2021.1
dateutil          : 2.8.1
pip               : 22.1.2
setuptools        : 63.1.0
Cython            : None
pytest            : None
hypothesis        : None
sphinx            : None
blosc             : None
feather           : None
xlsxwriter        : 1.3.8
lxml.etree        : None
html5lib          : None
pymysql           : None
psycopg2          : None
jinja2            : 3.1.2
IPython           : 8.4.0
pandas_datareader : None
bs4               : 4.9.3
bottleneck        : None
fsspec            : None
fastparquet       : None
gcsfs             : None
matplotlib        : 3.4.1
numexpr           : None
odfpy             : None
openpyxl          : 3.0.7
pandas_gbq        : None
pyarrow           : None
pyxlsb            : None
s3fs              : None
scipy             : 1.10.1
sqlalchemy        : None
tables            : None
tabulate          : 0.8.9
xarray            : None
xlrd              : None
xlwt              : None
numba             : None

Gabriel-Kissin commented 1 year ago

Current workaround:

def dp_quantile(data, quantiles, dp=3, dropna=True, drop_duplicate_quantiles=True):
    '''Returns approximate quantiles of data, rounded to the specified number of decimal places (dp).

    Useful when cutting data into bins where we want the bins to be based on data quantiles,
    but don't want arbitrarily long bin boundaries.
    Currently, pd.qcut can either give very exact boundaries (using precision=big_number),
    e.g. (1.02315641563135131, 1.02315641563135147],
    or it can give rounded boundaries (using precision=small_number), but then it does the
    binning incorrectly, see https://github.com/pandas-dev/pandas/issues/51532
    This function provides quantiles to the specified dp, which can then be used to cut the data with pd.cut.
    NOTE: It works by using floor for the first quantile, ceiling for the last, and round
        for the intermediate quantiles. This generally makes the groups as equally-sized
        as possible. It won't always be optimal, but short of iterating through all options
        and selecting the best-distributed cutting, it works well.
    NOTE 2: depending on the data and dp, some of the rounded quantiles may be equal.
        In that case they are dropped if drop_duplicate_quantiles==True.

    Args:
        data: np.array or similar of data
        quantiles: the quantiles of the data; anything which can be passed to np.quantile
        dp: how many decimal places the quantiles should be given to
        dropna: drop nan values before computing quantiles
        drop_duplicate_quantiles: drop repeated quantiles after rounding (recommended)
    '''

    if dropna:
        quantiles = np.nanquantile(data, quantiles)
    else:
        quantiles = np.quantile(data, quantiles)

    # floor the first edge and ceil the last so the min and max stay inside the bins
    bins = np.hstack([np.floor(quantiles[0] * 10**dp) / 10**dp,
                      np.round(quantiles[1:-1], dp),
                      np.ceil(quantiles[-1] * 10**dp) / 10**dp])

    if drop_duplicate_quantiles:
        bins = np.unique(bins)

    return bins

def qcut_dp(data, quantiles, dp=3, dropna=True, drop_duplicate_quantiles=True):
    '''Uses dp_quantile to cut the data into quantile-based bins.

    Note: dropna=True means the quantiles are computed without the nans; however, this
    function still returns the data cut with the nans in place. If you don't want that,
    manually remove nans before passing data to this function using .dropna().

    Args:
        data: np.array or similar of data
        quantiles: the quantiles of the data; anything which can be passed to np.quantile
        dp: how many decimal places the quantiles should be given to
        dropna: drop nan values before computing quantiles
        drop_duplicate_quantiles: drop repeated quantiles after rounding (recommended)

    Returns:
        pd.Series: the binned data.
    '''
    bins = dp_quantile(data=data, quantiles=quantiles, dp=dp,
                       dropna=dropna, drop_duplicate_quantiles=drop_duplicate_quantiles)
    return pd.cut(x=data, bins=bins, precision=dp)

Then, running the same code as before but using the above function instead of pd.qcut:

import pandas as pd
import numpy as np

interval_testing = pd.DataFrame(columns=['data', 'interval', 'data_in_interval'],)

interval_testing.data = np.linspace(0,1,100) + 0.000499

# 13 quantile-based buckets with edges rounded to 2 decimal places
interval_testing.interval = qcut_dp(interval_testing.data, quantiles=np.linspace(0, 1, 13 + 1), dp=2)

interval_testing.data_in_interval = [interval_testing.data[i] in interval_testing.interval[i]
                                     for i in range(len(interval_testing))]

interval_testing.loc[interval_testing.data_in_interval == False]

This returns an empty DataFrame, as all values are now in the correct intervals.

And this can be verified by inspecting the intervals and their data:

interval_testing[['interval', 'data']].groupby('interval').aggregate(['min', 'max', 'count'])

gives a table of the min, max and count of the data in each interval (screenshot in the original issue); each interval's min and max lie within its bounds, so everything looks right :-)

topper-123 commented 1 year ago

Thanks for this report @Gabriel-Kissin. Having the values inside the bins should be the expected behavior, and the current behavior can be considered buggy.

Would it be possible to implement your workaround in the pandas code, or is that too simplistic?