pola-rs / polars

Dataframes powered by a multithreaded, vectorized query engine, written in Rust
https://docs.pola.rs

Excessive Memory Consumption During Rolling Operations on Large DataFrames #16052

Open dguidara opened 1 week ago

dguidara commented 1 week ago

Checks

Reproducible example

import polars as pl
from datetime import datetime
import numpy as np
import random, string

def randomword(length):
    letters = string.ascii_lowercase
    return ''.join(random.choice(letters) for i in range(length))

dates = pl.datetime_range(datetime(2000, 1, 1), datetime(2022, 3, 1), "1d", eager=True).alias(
    "date"
)

categories = []
for i in dates:
    for counter in range(1, 2000):
        categories.append(randomword(6))

all_dates = pl.concat([dates for i in range(1, 2000)])

cats = pl.Series(categories)
# ~8,100 daily dates x 1,999 random categories, i.e. roughly 16 million rows
df = pl.DataFrame({"dates": all_dates, "categories": cats})
df = df.with_columns(value=pl.lit(np.random.rand(df.height)))
interval = "5y"
df = df.sort(by="dates")
grouped_dynamic = df.rolling(index_column="dates", period=interval, group_by="categories").agg(
    [(pl.col("value") + 1).cum_prod().shift(fill_value=1).cum_sum().last()]
)
print(grouped_dynamic)

Log output

N/A

Issue description

During rolling operations over a large window, memory consumption rises to roughly 3 to 10 times the initial memory footprint. The operation involves a DataFrame with approximately 2000 categories spanning a 22-year period, whose initial footprint is around 1 to 2 GB on my machine. When the rolling and aggregation step executes, memory usage escalates dramatically and can exhaust all available system memory. Notably, on Windows the memory remains at this elevated level even after the operation completes.

Steps to Reproduce:

1. Instantiate a DataFrame with approximately 2000 categories spanning a 22-year period.
2. Execute the rolling and aggregation operations on the DataFrame.
3. Monitor memory consumption during the operation (see the measurement sketch below).
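For the measurement step, the snippet below is a minimal sketch of how memory can be sampled, assuming `psutil` is installed and that `df` and `interval` were already created by the reproducible example above. Note it only samples RSS before and after the query, so the in-flight peak still has to be watched in a system monitor.

```
# Minimal memory-sampling sketch (assumes psutil is installed and that
# `df` and `interval` come from the reproducible example above).
import gc

import psutil
import polars as pl

proc = psutil.Process()

def rss_gb() -> float:
    # Resident set size of the current process, in GB.
    return proc.memory_info().rss / 1024**3

gc.collect()
print(f"estimated DataFrame size: {df.estimated_size('gb'):.2f} GB")
print(f"RSS before rolling:       {rss_gb():.2f} GB")

result = df.rolling(index_column="dates", period=interval, group_by="categories").agg(
    [(pl.col("value") + 1).cum_prod().shift(fill_value=1).cum_sum().last()]
)

print(f"RSS after rolling:        {rss_gb():.2f} GB")

# On Windows the RSS stays elevated even after dropping the result.
del result
gc.collect()
print(f"RSS after del + gc:       {rss_gb():.2f} GB")
```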

Expected behavior

Memory consumption should remain within reasonable bounds relative to the initial size of the DataFrame, even during intensive rolling and aggregation operations.
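A possible workaround, sketched below purely as an untested assumption, is to run the rolling aggregation one category partition at a time and concatenate the results instead of using `group_by`; I have not verified that this actually keeps peak memory lower on the affected versions.

```
# Untested workaround sketch: process one category partition at a time
# instead of a single rolling call with group_by. Assumes `df` and
# `interval` from the reproducible example above.
import polars as pl

parts = []
for part in df.partition_by("categories", maintain_order=True):
    out = (
        part.sort("dates")
        .rolling(index_column="dates", period=interval)
        .agg([(pl.col("value") + 1).cum_prod().shift(fill_value=1).cum_sum().last()])
        # rolling() without group_by drops the key column, so add it back
        .with_columns(pl.lit(part["categories"][0]).alias("categories"))
    )
    parts.append(out)

grouped_by_parts = pl.concat(parts)
print(grouped_by_parts)
```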

Installed versions

```
--------Version info---------
Polars:               0.20.23
Index type:           UInt32
Platform:             Windows-10-10.0.22631-SP0
Python:               3.9.10 (tags/v3.9.10:f2f3f53, Jan 17 2022, 15:14:21) [MSC v.1929 64 bit (AMD64)]

----Optional dependencies----
adbc_driver_manager:  0.11.0
cloudpickle:          3.0.0
connectorx:           0.3.2
deltalake:            0.17.3
fastexcel:            0.10.4
fsspec:               2023.12.2
gevent:               24.2.1
hvplot:               0.9.2
matplotlib:           3.5.1
nest_asyncio:         1.5.4
numpy:                1.22.1
openpyxl:             3.0.10
pandas:               1.4.0
pyarrow:              16.0.0
pydantic:             2.7.1
pyiceberg:            0.6.1
pyxlsb:
sqlalchemy:           2.0.29
xlsx2csv:             0.8.2
xlsxwriter:           3.0.2
```