Gabriel-Kissin opened this issue 1 year ago
Current workaround:
```python
import numpy as np
import pandas as pd

def dp_quantile(data, quantiles, dp=3, dropna=True, drop_duplicate_quantiles=True):
    '''Returns approximate quantiles of data, rounded to the specified number of decimal places (dp).

    Useful when cutting data into bins where we want the bins to be based on data quantiles,
    but don't want the bin boundaries to be arbitrarily long.
    Currently, pd.qcut can either give very exact boundaries (using precision=big_number),
    e.g. (1.02315641563135131, 1.02315641563135147], or it can give rounded boundaries
    (using precision=small_number), but then it does the binning incorrectly, see
    https://github.com/pandas-dev/pandas/issues/51532.
    This function provides quantiles to the specified dp, which can then be used to cut
    the data with pd.cut.

    NOTE: It works by using floor for the first quantile, ceiling for the last, and round
    for the intermediate quantiles. This can generally be assumed to give the best quantiles,
    such that groups are as equally distributed as possible. That won't always be true, but
    short of iterating through all options and selecting the best-distributed cutting, this works.
    NOTE 2: depending on the data and dp, some of the quantiles may be equal. In this case,
    they will be dropped if drop_duplicate_quantiles==True.

    Args:
        data: np.array or similar of data
        quantiles: the quantiles of the data. Anything which can be passed to np.quantile
        dp: how many decimal places the quantiles should be given to
        dropna: drop NaN values before computing quantiles
        drop_duplicate_quantiles: drop duplicate bin edges (recommended)
    '''
    if dropna:
        quantiles = np.nanquantile(data, quantiles)
    else:
        quantiles = np.quantile(data, quantiles)
    bins = np.hstack([np.floor(quantiles[0] * 10**dp) * 10**-dp,
                      np.round(quantiles[1:-1], dp),
                      np.ceil(quantiles[-1] * 10**dp) * 10**-dp])
    if drop_duplicate_quantiles:
        bins = np.unique(bins)
    return bins
```
```python
def qcut_dp(data, quantiles, dp=3, dropna=True, drop_duplicate_quantiles=True):
    '''Uses dp_quantile to cut the data into quantile-based bins.

    Note: dropna=True means that the quantiles will be computed without the NaNs.
    However, this function will still return the data cut with the NaNs still in it.
    If you don't want that, manually remove NaNs before passing data to this function
    using .dropna().

    Args:
        data: np.array or similar of data
        quantiles: the quantiles of the data. Anything which can be passed to np.quantile
        dp: how many decimal places the quantiles should be given to
        dropna: drop NaN values before computing quantiles
        drop_duplicate_quantiles: recommended

    Returns:
        pd.Series: the binned data.
    '''
    bins = dp_quantile(data=data, quantiles=quantiles, dp=dp,
                       dropna=dropna, drop_duplicate_quantiles=drop_duplicate_quantiles)
    return pd.cut(x=data, bins=bins, precision=dp)
```
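For illustration, the rounded edges can also be inspected directly. This is a minimal sketch on made-up skewed data (the sample and seed are assumptions, not part of the original report):

```python
# Hypothetical quick check of dp_quantile on some skewed sample data
rng = np.random.default_rng(42)
sample = rng.exponential(scale=1.0, size=1000)

# quartile edges rounded to 2 dp; the outermost edges are floored/ceiled,
# so every datapoint is guaranteed to fall inside some bin
edges = dp_quantile(sample, quantiles=np.linspace(0, 1, 4 + 1), dp=2)
print(edges)
```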
Then, running the same code as before, but using the above function instead of pd.qcut:
```python
import pandas as pd
import numpy as np

interval_testing = pd.DataFrame(columns=['data', 'interval', 'data_in_interval'])
interval_testing.data = np.linspace(0, 1, 100) + 0.000499
interval_testing.interval = qcut_dp(interval_testing.data, quantiles=np.linspace(0, 1, 13 + 1), dp=2)
interval_testing.data_in_interval = [(interval_testing.data[i] in interval_testing.interval[i])
                                     for i in range(len(interval_testing))]
interval_testing.loc[interval_testing.data_in_interval == False]
```
This returns an empty DataFrame, as all values are in the correct intervals.
And this can be verified by inspecting the intervals and their data:

```python
interval_testing[['interval', 'data']].groupby('interval').aggregate(['min', 'max', 'count'])
```

which gives a per-interval summary of min, max and count in which everything looks right :-)
Thanks for this report @Gabriel-Kissin. Having the values inside the bins should be the expected behavior, and the current behavior can be considered buggy.
Would it be possible to implement your workaround in the pandas code, or is that too simplistic?
Pandas version checks
[X] I have checked that this issue has not already been reported.
[X] I have confirmed this bug exists on the latest version of pandas.
[X] I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
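(The original MRE isn't reproduced in this excerpt. Based on the workaround demonstration above, which runs "the same code as before" with `qcut_dp` swapped in for `pd.qcut`, it was presumably along these lines:)

```python
import pandas as pd
import numpy as np

# Build a small dataset, qcut it into 13 quantile bins with rounded (precision=2)
# boundaries, and check whether each datapoint actually lies inside its assigned bin.
interval_testing = pd.DataFrame(columns=['data', 'interval', 'data_in_interval'])
interval_testing.data = np.linspace(0, 1, 100) + 0.000499
interval_testing.interval = pd.qcut(interval_testing.data, q=np.linspace(0, 1, 13 + 1), precision=2)
interval_testing.data_in_interval = [(interval_testing.data[i] in interval_testing.interval[i])
                                     for i in range(len(interval_testing))]
# rows where the datapoint is NOT inside its assigned interval -- non-empty, showing the bug
interval_testing.loc[interval_testing.data_in_interval == False]
```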
Issue Description
Intro: `pd.cut` splits the data into bins. It has a parameter `precision` which controls the precision of the bins. E.g. if `precision=2`, then bins will be something like (0.02, 0.04] or (0.014, 0.028] (precision uses significant figures, not decimal places).

I had expected that (1) the bins would be rounded, and only then would (2) the data be binned into the rounded bins, so that all data would be binned correctly. However, the way it seems to work is to (1) bin the data, and THEN (2) round the bins. The obvious problem with this is that you end up with some datapoints being assigned bins they don't fit into. The output of the MRE code above shows this: if in the MRE we set `precision=4`, all values are binned correctly for this particular dataset.

NOTE 1: The same problem exists with `pd.qcut`, which cuts the data into buckets based on data quantiles. In that case it could be argued that the current behaviour is desirable, so that you have the correct proportion of data in each bin: e.g. if using the quartiles, then the way it currently works means that 25% of the data will get into each bucket, whereas the way I am suggesting, you can get more or less data in each bucket. However, that argument isn't very strong with `pd.cut`. And in any case, I think that correctly binning data should always be the primary consideration, and the size of the bins secondary to that.

NOTE 2: the pandas docs describe `precision` as the precision at which to store and display the bin labels, which implies it acts as it does. However, it could at least be clearer on this point: most users won't expect incorrectly binned data, and particularly given that a precision of 3 is used by default, users who haven't specified `precision` at all could get incorrect results.
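For instance, a minimal sketch of how `precision` shows up in the displayed labels (the data here is made up; the exact displayed edges depend on the data):

```python
import numpy as np
import pandas as pd

# with precision=2, displayed bin edges are short, e.g. (0.014, 0.028];
# with a large precision, the edges carry many more digits
x = np.linspace(0.014, 0.084, 50)
print(pd.cut(x, bins=5, precision=2).categories)
print(pd.cut(x, bins=5, precision=10).categories)
```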
Expected Behavior
Expected behaviour would be to put e.g. 0.152014 in the bin (0.15, 0.23], not in (0.077, 0.15]. I.e. define the bins first, then do the binning.
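In code terms, the expected order of operations would be something like this sketch (the function name is hypothetical, and for simplicity it rounds to decimal places rather than reproducing `precision`'s significant-figures behaviour):

```python
import numpy as np
import pandas as pd

def cut_round_first(data, n_bins, dp=3):
    # (1) compute the raw equal-width edges
    raw_edges = np.linspace(np.min(data), np.max(data), n_bins + 1)
    # (2) round the edges, flooring/ceiling the outermost ones so that
    #     every datapoint still falls inside some bin
    edges = np.round(raw_edges, dp)
    edges[0] = np.floor(raw_edges[0] * 10**dp) / 10**dp
    edges[-1] = np.ceil(raw_edges[-1] * 10**dp) / 10**dp
    # (3) only now bin the data, against the already-rounded edges
    return pd.cut(data, bins=np.unique(edges), include_lowest=True)
```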
Installed Versions