pandas-dev / pandas

Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
https://pandas.pydata.org
BSD 3-Clause "New" or "Revised" License

BUG: Pearson correlation outside expected range -1 to 1 #59652

Open madrjor02-bh opened 2 months ago

madrjor02-bh commented 2 months ago

Pandas version checks

Reproducible Example

import pandas as pd

values = [{'col1': 30.0, 'col2': 116.80000305175781},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': 30.100000381469727, 'col2': 116.8000030517578},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None}]

data = pd.DataFrame(values)
data.corr(method='pearson')

Issue Description

In the code snippet I'm trying to calculate the correlation between a pair of columns. However, when using the Pearson correlation method on this particular example, the outputted correlation is outside the expected range of -1 to 1.

Expected Behavior

The output of the Pearson correlation method should be within the -1 to 1 range.
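Any valid Pearson coefficient must satisfy |r| ≤ 1 (a consequence of the Cauchy-Schwarz inequality). As an illustration of that invariant, a defensive wrapper in user code (the helper `checked_corr` below is hypothetical, not a pandas API) could look like:

```python
import pandas as pd

def checked_corr(df: pd.DataFrame) -> pd.DataFrame:
    """Compute a Pearson correlation matrix and verify the |r| <= 1 bound."""
    corr = df.corr(method="pearson")
    # Allow a tiny tolerance for ordinary floating-point rounding.
    if (corr.abs() > 1 + 1e-12).any().any():
        raise ValueError("Pearson correlation outside the [-1, 1] range")
    return corr
```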

Installed Versions

INSTALLED VERSIONS
------------------
commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python : 3.9.19.final.0
python-bits : 64
OS : Linux
OS-release : 5.10.223-211.872.amzn2.x86_64
Version : #1 SMP Mon Jul 29 19:52:29 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : C.UTF-8
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.2
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 69.5.1
pip : 24.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.4
IPython : 8.18.1
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.9.2
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
pyarrow : 14.0.1
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
sqlalchemy : 2.0.31
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
RaghavKhemka commented 1 month ago

The bug does seem to exist in df.corr.

Reason for the Unexpected Behavior

The cause is Welford's method, which is used for the running computation; it performs poorly when the data contain extremely small differences or run into floating-point precision limits.
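For illustration, here is a pure-Python sketch of a Welford-style single-pass correlation. This is only the general shape of the update (the actual pandas implementation is in Cython), not the pandas code itself:

```python
import math

def welford_corr(xs, ys):
    # Single-pass (Welford-style) accumulation of the means, variances,
    # and the covariance co-moment.
    mean_x = mean_y = cov = var_x = var_y = 0.0
    n = 0
    for x, y in zip(xs, ys):
        n += 1
        dx = x - mean_x              # deviation from the *old* mean
        dy = y - mean_y
        mean_x += dx / n
        mean_y += dy / n
        cov += dx * (y - mean_y)     # note: uses the *updated* mean_y
        var_x += dx * (x - mean_x)
        var_y += dy * (y - mean_y)
    return cov / math.sqrt(var_x * var_y)
```

With well-scaled data this is numerically fine, but when the column values are nearly identical, as in the example above, the tiny running differences can cancel catastrophically, which is how the ratio can drift outside [-1, 1].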

Finding Correlation using the standard Pearson Product-Moment Correlation

import pandas as pd
import numpy as np
import math

values = [{'col1': 30.0, 'col2': 116.80000305175781},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None},
 {'col1': 30.100000381469727, 'col2': 116.8000030517578},
 {'col1': None, 'col2': None},
 {'col1': None, 'col2': None}]

data = pd.DataFrame(values)

def corr_coef(X, Y):
    # Two-pass Pearson product-moment correlation:
    # compute the means first, then accumulate the centered sums.
    x1 = np.array(X)
    y1 = np.array(Y)
    x_m = x1.mean()
    y_m = y1.mean()
    numer = 0.0  # sum of (x - x_mean) * (y - y_mean)
    v1sq = 0.0   # sum of squared x deviations
    v2sq = 0.0   # sum of squared y deviations
    for i in range(len(x1)):
        xx = x1[i] - x_m
        yy = y1[i] - y_m
        numer += xx * yy
        v1sq += xx * xx
        v2sq += yy * yy
    return numer / math.sqrt(v1sq * v2sq)

data = data.dropna()
corr_coef(data.iloc[:,0],data.iloc[:,1])

Result:

-0.7071067811865475

We get the same -0.707.. result if np.corrcoef() is used.
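For reference, the NumPy cross-check looks like this. np.corrcoef uses the standard two-pass formula and additionally clips its output into [-1, 1]:

```python
import numpy as np

# The two non-null rows from the reproducible example above.
x = np.array([30.0, 30.100000381469727])
y = np.array([116.80000305175781, 116.8000030517578])

r = np.corrcoef(x, y)[0, 1]  # reported as -0.7071067811865475 in this thread
```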

Though this is a rare case in practice, I believe it should be fixed. I can work on fixing the issue by changing df.corr from Welford's method to the standard method. It might slightly increase running time; I'm not sure whether that trade-off will be acceptable to the team.
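Until a fix lands, one possible user-side stopgap (it masks the precision loss rather than curing it) is to clamp the result with DataFrame.clip:

```python
import pandas as pd

values = [
    {"col1": 30.0, "col2": 116.80000305175781},
    {"col1": 30.100000381469727, "col2": 116.8000030517578},
]
data = pd.DataFrame(values)

# Force the correlation matrix back into the mathematically valid range.
corr = data.corr(method="pearson").clip(lower=-1.0, upper=1.0)
```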

rhshadrach commented 1 month ago

Thanks for the report!

It might slightly impact the time for running.

If it is slight, I think the numerical stability would be valued.

KevsterAmp commented 1 month ago

take