Closed — corey-dawson closed this 2 months ago
Thanks for the report. Are you able to programmatically define a dataset that reproduces the issue, e.g.
```python
size = 25_000_000
df = pd.DataFrame({"a": size * [20 * "90_WW"]}, dtype="string[pyarrow]")
```
You need to construct a column whose total number of characters exceeds `np.iinfo("int32").max` (more than 2 GB) to run into those issues. See https://github.com/pandas-dev/pandas/issues/56259 for some context (and a reproducer); that is the reason we decided to use large_string for the future default string dtype.
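For intuition, that limit can be checked with a back-of-the-envelope calculation (the `would_overflow` helper below is illustrative, not part of pandas or pyarrow):

```python
import numpy as np

# Arrow's default string type stores character offsets as int32, so one
# array's cumulative character count cannot exceed this (~2 GiB):
INT32_MAX = np.iinfo("int32").max  # 2147483647

def would_overflow(n_rows: int, chars_per_row: int) -> bool:
    """Illustrative check: would a fixed-width string column exceed
    the int32 offset range?"""
    return n_rows * chars_per_row > INT32_MAX

# The reproducer above: 25M rows of 100-character strings (20 * "90_WW")
print(would_overflow(25_000_000, 100))  # True
```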
@corey-dawson a short-term solution is to cast that column to pyarrow's large string type: `df["full_path"] = df["full_path"].astype(pd.ArrowDtype(pa.large_string()))`. That uses a bit more memory, but will avoid those issues in general.
Longer term, this needs to be fixed on the pyarrow side, and there has already been some recent improvement there. See https://github.com/apache/arrow/issues/25822 and https://github.com/apache/arrow/issues/33049 for related issues. There is a WIP PR at https://github.com/apache/arrow/pull/41700 which will hopefully make it into one of the next releases.
Thanks for the workaround and the information on the upcoming fix, @jorisvandenbossche. Looking forward to continuing to test the Arrow backend and to realizing the performance improvements Arrow brings to my jobs. I'll watch for upcoming releases and close this issue in pandas, since it appears to live in the Arrow repo.
Pandas version checks
[X] I have checked that this issue has not already been reported.
[X] I have confirmed this bug exists on the latest version of pandas.
[X] I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
Issue Description
On large datasets (24 million rows), string operations raise "pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays", with Python 3.11 and pandas 2.2.2. This seems related to this previous issue: https://github.com/pandas-dev/pandas/issues/55606
Expected Behavior
String operations work with the Arrow backend on large datasets.
Installed Versions
INSTALLED VERSIONS
commit                : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python                : 3.11.9.final.0
python-bits           : 64
OS                    : Windows
OS-release            : 10
Version               : 10.0.22621
machine               : AMD64
processor             : Intel64 Family 6 Model 165 Stepping 2, GenuineIntel
byteorder             : little
LC_ALL                : None
LANG                  : None
LOCALE                : English_United States.1252
pandas                : 2.2.2
numpy                 : 2.1.0
pytz                  : 2024.1
dateutil              : 2.9.0.post0
setuptools            : 72.1.0
pip                   : 24.2
Cython                : None
pytest                : None
hypothesis            : None
sphinx                : None
blosc                 : None
feather               : None
xlsxwriter            : None
lxml.etree            : None
html5lib              : None
pymysql               : None
psycopg2              : None
jinja2                : None
IPython               : None
pandas_datareader     : None
adbc-driver-postgresql: None
adbc-driver-sqlite    : None
bs4                   : None
bottleneck            : None
dataframe-api-compat  : None
fastparquet           : None
fsspec                : None
gcsfs                 : None
matplotlib            : None
numba                 : None
numexpr               : None
odfpy                 : None
openpyxl              : None
pandas_gbq            : None
pyarrow               : 17.0.0
pyreadstat            : None
python-calamine       : None
pyxlsb                : None
s3fs                  : None
scipy                 : None
sqlalchemy            : None
tables                : None
tabulate              : None
xarray                : None
xlrd                  : None
zstandard             : None
tzdata                : 2024.1
qtpy                  : None
pyqt5                 : None