pandas-dev / pandas

Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
https://pandas.pydata.org
BSD 3-Clause "New" or "Revised" License

BUG: memory leak when using read_csv with python engine #59694

Open r1ckdu opened 2 months ago

r1ckdu commented 2 months ago

Pandas version checks

Reproducible Example

import pandas as pd
df = pd.read_csv(filename, sep="\t\t|\t", header=5, engine='python') # file is about 250 MB on disk
## memory used: 3.5GB
df.memory_usage() ## 14294152 bytes x 24 columns == 327 MB
from copy import deepcopy
df2 = deepcopy(df)
## memory 3.9GB
del df
## memory 3.5GB
del df2
## memory 0.1GB
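
For reference, a self-contained way to take the same memory readings (a minimal sketch; it assumes psutil is installed, and "data.tsv" is a placeholder for the 250 MB file):

import gc
import psutil
import pandas as pd

proc = psutil.Process()

def rss_gb() -> float:
    """Resident set size of this process, in GB."""
    return proc.memory_info().rss / 1e9

print(f"baseline: {rss_gb():.2f} GB")
df = pd.read_csv("data.tsv", sep="\t\t|\t", header=5, engine="python")
print(f"after read_csv (python engine): {rss_gb():.2f} GB")
del df
gc.collect()
print(f"after del df + gc: {rss_gb():.2f} GB")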

Issue Description

Loading this 250 MB file somehow results in 3.5 GB of memory usage. I understand that pandas keeps metadata, but this does not seem reasonable.
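
As a side note, df.memory_usage() only counts the backing arrays, which for object-dtype (string) columns is just 8 bytes per cell; passing deep=True also sizes the Python strings those cells point to, giving a more honest lower bound. A quick check (a sketch, reusing df from the reproducible example above):

shallow_mb = df.memory_usage().sum() / 1e6
deep_mb = df.memory_usage(deep=True).sum() / 1e6
print(f"shallow: {shallow_mb:.0f} MB, deep (incl. string payloads): {deep_mb:.0f} MB")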

When doing pd.read_csv(filename, sep="\t", header=5) (only \t, without engine='python'), the memory usage is 416.764 KB, which I think is reasonable.
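
Note that the two calls do not parse the file identically: a sep longer than one character (other than '\s+') is treated as a regular expression, which only the python engine supports, so the fast C engine can only be used with the plain \t separator. A side-by-side check (a sketch; "data.tsv" is a placeholder):

import pandas as pd

# Single-character separator: handled by the fast C engine.
df_c = pd.read_csv("data.tsv", sep="\t", header=5)

# Regex separator: requires the pure-Python engine; passing engine="c"
# here would raise ValueError.
df_py = pd.read_csv("data.tsv", sep="\t\t|\t", header=5, engine="python")

# The parsed frames can differ if the file contains runs of tabs.
print(df_c.shape, df_py.shape)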

Expected Behavior

Memory usage should be the same when using both engines. Deep-copying df and then deleting df should also free any memory that is not needed for the values in the table.
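
One way to narrow this down is to separate live Python allocations from memory the allocator simply has not returned to the OS: tracemalloc sees only the former, while OS-reported figures include both (a sketch; the filename is a placeholder):

import tracemalloc
import pandas as pd

tracemalloc.start()
df = pd.read_csv("data.tsv", sep="\t\t|\t", header=5, engine="python")
# current = bytes still held by Python objects, peak = high-water mark
# during parsing; a large peak with a small current would point at
# temporary allocations made while parsing.
current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 1e9:.2f} GB, peak: {peak / 1e9:.2f} GB")
tracemalloc.stop()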

Installed Versions


INSTALLED VERSIONS

commit              : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python              : 3.10.14.final.0
python-bits         : 64
OS                  : Windows
OS-release          : 10
Version             : 10.0.22621
machine             : AMD64
processor           : AMD64 Family 23 Model 96 Stepping 1, AuthenticAMD
byteorder           : little
LC_ALL              : None
LANG                : en_US.UTF-8
LOCALE              : English_Germany.1252

pandas              : 2.2.2
numpy               : 2.0.1
pytz                : 2024.1
dateutil            : 2.9.0
setuptools          : 72.1.0
pip                 : 24.2
Cython              : None
pytest              : 8.3.2
hypothesis          : None
sphinx              : None
blosc               : None
feather             : None
xlsxwriter          : None
lxml.etree          : None
html5lib            : None
pymysql             : None
psycopg2            : None
jinja2              : 3.1.4
IPython             : 8.26.0
pandas_datareader   : None
adbc-driver-postgresql : None
adbc-driver-sqlite  : None
bs4                 : 4.12.3
bottleneck          : None
dataframe-api-compat : None
fastparquet         : None
fsspec              : None
gcsfs               : None
matplotlib          : 3.9.1
numba               : None
numexpr             : None
odfpy               : None
openpyxl            : 3.1.5
pandas_gbq          : None
pyarrow             : None
pyreadstat          : None
python-calamine     : None
pyxlsb              : None
s3fs                : None
scipy               : 1.14.0
sqlalchemy          : None
tables              : None
tabulate            : None
xarray              : None
xlrd                : None
zstandard           : 0.23.0
tzdata              : 2024.1
qtpy                : None
pyqt5               : None

rhshadrach commented 2 months ago

Thanks for the report. Can you share a reproducible example? We cannot reproduce this without the file. It does not need to be so large, and it is even better if you can construct the file programmatically, e.g.

df = pd.DataFrame(...)
df.to_csv("test.csv")