pandas-dev / pandas

Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
https://pandas.pydata.org
BSD 3-Clause "New" or "Revised" License

BUG: Large dataset exception on arrow string operations #59752

Closed: corey-dawson closed this issue 2 months ago

corey-dawson commented 2 months ago

Pandas version checks

Reproducible Example

import pandas as pd
df = pd.read_parquet("mypath/myfile.parquet", engine="pyarrow")
df = df.convert_dtypes(dtype_backend="pyarrow")
len(df[df.full_path.str.endswith("90_WW") ])

Issue Description

On large datasets (24 million rows), string operations fail with "pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays", using Python 3.11 and pandas 2.2.2. This seems related to this previous issue: https://github.com/pandas-dev/pandas/issues/55606

Traceback (most recent call last):
  File "C:\Users\corey\Analytics_Software\pycharm\PyCharm 2024.1.1\plugins\python\helpers\pydev\pydevconsole.py", line 364, in runcode
    coro = func()
           ^^^^^^
  File "<input>", line 1, in <module>
  File "C:\Users\corey\Miniconda3\envs\analytics311\Lib\site-packages\pandas\core\frame.py", line 4093, in __getitem__
    return self._getitem_bool_array(key)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\corey\Miniconda3\envs\analytics311\Lib\site-packages\pandas\core\frame.py", line 4155, in _getitem_bool_array
    return self._take_with_is_copy(indexer, axis=0)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\corey\Miniconda3\envs\analytics311\Lib\site-packages\pandas\core\generic.py", line 4153, in _take_with_is_copy
    result = self.take(indices=indices, axis=axis)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\corey\Miniconda3\envs\analytics311\Lib\site-packages\pandas\core\generic.py", line 4133, in take
    new_data = self._mgr.take(
               ^^^^^^^^^^^^^^^
  File "C:\Users\corey\Miniconda3\envs\analytics311\Lib\site-packages\pandas\core\internals\managers.py", line 894, in take
    return self.reindex_indexer(
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\corey\Miniconda3\envs\analytics311\Lib\site-packages\pandas\core\internals\managers.py", line 687, in reindex_indexer
    new_blocks = [
                 ^
  File "C:\Users\corey\Miniconda3\envs\analytics311\Lib\site-packages\pandas\core\internals\managers.py", line 688, in <listcomp>
    blk.take_nd(
  File "C:\Users\corey\Miniconda3\envs\analytics311\Lib\site-packages\pandas\core\internals\blocks.py", line 1307, in take_nd
    new_values = algos.take_nd(
                 ^^^^^^^^^^^^^^
  File "C:\Users\corey\Miniconda3\envs\analytics311\Lib\site-packages\pandas\core\array_algos\take.py", line 114, in take_nd
    return arr.take(indexer, fill_value=fill_value, allow_fill=allow_fill)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\corey\Miniconda3\envs\analytics311\Lib\site-packages\pandas\core\arrays\arrow\array.py", line 1309, in take
    return type(self)(self._pa_array.take(indices))
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pyarrow\\table.pxi", line 1052, in pyarrow.lib.ChunkedArray.take
  File "C:\Users\corey\Miniconda3\envs\analytics311\Lib\site-packages\pyarrow\compute.py", line 487, in take
    return call_function('take', [data, indices], options, memory_pool)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pyarrow\\_compute.pyx", line 590, in pyarrow._compute.call_function
  File "pyarrow\\_compute.pyx", line 385, in pyarrow._compute.Function.call
  File "pyarrow\\error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow\\error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays

Expected Behavior

String operations should work with the Arrow backend on large datasets.

Installed Versions

INSTALLED VERSIONS

commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python : 3.11.9.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.22621
machine : AMD64
processor : Intel64 Family 6 Model 165 Stepping 2, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.2
numpy : 2.1.0
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 72.1.0
pip : 24.2
pyarrow : 17.0.0
tzdata : 2024.1
(all other optional dependencies — Cython, pytest, fastparquet, matplotlib, scipy, sqlalchemy, etc. — : None)

rhshadrach commented 2 months ago

Thanks for the report. Are you able to programmatically define a dataset that reproduces the issue, e.g.

size = 25_000_000
df = pd.DataFrame({"a": size * [20 * "90_WW"]}, dtype="string[pyarrow]")

jorisvandenbossche commented 2 months ago

You need to construct a column with more than np.iinfo("int32").max total number of characters (more than 2GB) to run into those issues. See https://github.com/pandas-dev/pandas/issues/56259 for some context (and a reproducer), and that is the reason we decided to use large_string for the future default string dtype.

@corey-dawson short term solution is to cast that column to pyarrow's large string type (df["full_path"] = df["full_path"].astype(pd.ArrowDtype(pa.large-string()))). That uses a bit more memory, but will avoid those issues in general.

Longer term, this is something that needs to be fixed on the pyarrow side, and it actually has already been improved lately. See https://github.com/apache/arrow/issues/25822 and https://github.com/apache/arrow/issues/33049 for related issues. There is a WIP PR at https://github.com/apache/arrow/pull/41700 which will hopefully make it into one of the next releases.

corey-dawson commented 2 months ago

Thanks for the workaround and the information about the upcoming fix, @jorisvandenbossche. Looking forward to continuing to test the Arrow backend and realizing the performance improvements Arrow brings to my jobs. I will watch for upcoming releases and close this issue in pandas, since the underlying bug lives in the Arrow repo.