apache / arrow

Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
https://arrow.apache.org/
Apache License 2.0

[Python][Docs] Provide guidelines for expectations about and how to investigate memory usage #36378

Open jorisvandenbossche opened 1 year ago

jorisvandenbossche commented 1 year ago

We regularly get reports about potential memory usage issues (memory leaks, ..), and we often need to clarify expectations around what is being observed or give hints on how to explore the memory usage. Given that this comes up regularly, it might be useful to gather such content on a documentation page so we can point to it instead of repeating it every time.

(recent examples: https://github.com/apache/arrow/issues/36100, https://github.com/apache/arrow/issues/36101)

Some aspects that could be useful to mention on such a page:

Other things we could add?

cc @westonpace @pitrou @AlenkaF @anjakefala

anjakefala commented 10 months ago

It might be good to include a link to: https://arrow.apache.org/docs/python/pandas.html#reducing-memory-use-in-table-to-pandas

anjakefala commented 9 months ago

All of these are notes from @jorisvandenbossche, that are possibly relevant for this document:

If you have a pandas DataFrame with strings, those won't be counted (by default) in the memory usage pandas reports:

In [1]: df = pd.DataFrame({"col": ["some", "strings", "in", "my", "dataset"]*100})

In [2]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 500 entries, 0 to 499
Data columns (total 1 columns):
 #   Column  Non-Null Count  Dtype 
---  ------  --------------  ----- 
 0   col     500 non-null    object
dtypes: object(1)
memory usage: 4.0+ KB

In [3]: df.memory_usage()
Out[3]: 
Index     132
col      4000
dtype: int64

In [4]: df.memory_usage(deep=True)
Out[4]: 
Index      132
col      30700
dtype: int64

In [5]: df.info(memory_usage="deep")
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 500 entries, 0 to 499
Data columns (total 1 columns):
 #   Column  Non-Null Count  Dtype 
---  ------  --------------  ----- 
 0   col     500 non-null    object
dtypes: object(1)
memory usage: 30.1 KB

Because pandas by default stores strings in a numpy object-dtype array, i.e. literally an array of Python objects, the reported memory usage by default only covers the size of the numpy array itself, i.e. just the pointers, not the string objects they point to. Iteratively querying the size of every Python object in such an array can be very costly, so pandas skips it by default. To get the real memory usage, you have to pass deep=True to the memory_usage() method (or memory_usage="deep" to info()).


Verify what table.get_total_buffer_size() returns: this can give a different number than nbytes when the table's buffers are a slice of a bigger dataset. table.nbytes only counts the bytes actually used by the table, while get_total_buffer_size() gives the total size of the referenced buffers, disregarding any slicing offsets.
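
A minimal sketch of the difference, assuming a table whose arrays are a zero-copy slice of a larger table:

import pyarrow as pa

table = pa.table({"col": list(range(1_000_000))})  # ~8 MB of int64 values
sliced = table.slice(0, 10)                        # zero-copy slice sharing the same buffers

# nbytes accounts for the slice offset/length, so it only counts the
# bytes logically referenced by `sliced` ...
print(sliced.nbytes)

# ... while get_total_buffer_size() reports the full size of the
# underlying buffers that are kept alive, disregarding the slicing.
print(sliced.get_total_buffer_size())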


More info on the split_blocks=True option of to_pandas():

Nowadays not that many operations trigger consolidation, apart from copying a DataFrame (an explicit copy(), but copies can also be made under the hood by other operations that return a copy).

In practice, consolidation means that columns of the same data type are stored in a single 2D array. For example, a pyarrow table with 10 float columns is represented in pyarrow as 10 separate 1D arrays; when converting to pandas, those 10 1D arrays are converted (and thus copied) into a single 2D array.

But note that this kind of consolidation can only roughly "double" the memory usage compared to the original table, given that the new 2D array has the same total size as the 10 1D arrays it replaces.
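
A rough sketch of the difference (the table construction here is just an illustrative assumption):

import numpy as np
import pyarrow as pa

# A table with 10 float64 columns, i.e. 10 separate 1D arrays in pyarrow.
table = pa.table({f"f{i}": np.random.rand(1_000_000) for i in range(10)})

# The default conversion consolidates the same-dtype columns into a single
# 2D block, which copies the 10 1D arrays into one new array.
df_consolidated = table.to_pandas()

# split_blocks=True keeps one block per column, avoiding that extra copy;
# combined with self_destruct=True it can further reduce peak memory.
df_split = table.to_pandas(split_blocks=True)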


Some reasons a pandas DataFrame can be a lot bigger than the table that created it: nested data such as structs gets converted to Python dictionaries stored in an object column, which is much less memory efficient; and boolean values are stored as bytes instead of bits, so that's an 8x size increase.
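
The boolean case can be illustrated with a minimal sketch like this:

import pyarrow as pa

# 1,000,000 booleans: Arrow stores these as a bitmap (1 bit per value) ...
table = pa.table({"flag": [True, False] * 500_000})
print(table.nbytes)                    # ~125,000 bytes

# ... while the converted pandas column uses 1 byte per value.
df = table.to_pandas()
print(df.memory_usage(deep=True))      # ~1,000,000 bytes for "flag"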


Use table.schema to get the types of the columns; knowing the exact types is important for diagnosing confusion about the memory footprint.
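
For example (with an illustrative table; in practice inspect the table you are debugging):

import pyarrow as pa

table = pa.table({"flag": [True], "name": ["a"], "value": [1.0]})
print(table.schema)
# flag: bool
# name: string
# value: double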

anjakefala commented 8 months ago

https://arrow.apache.org/docs/python/generated/pyarrow.log_memory_allocations.html

pa.log_memory_allocations() enables logging of when Arrow requests and frees memory.
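
A minimal sketch of enabling and disabling it around some allocations:

import pyarrow as pa

# Log every allocation/deallocation made through Arrow's default memory pool.
pa.log_memory_allocations(True)

table = pa.table({"col": list(range(1_000_000))})
del table

# Turn the logging off again.
pa.log_memory_allocations(False)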

https://github.com/bloomberg/memray, run with its --native flag so that native (C/C++) stack frames are included in the reports.

There might be differences between the amount of memory Arrow requests and how much the allocator actually allocates.

anjakefala commented 8 months ago

Expected differences in behaviour between memory allocators: https://issues.apache.org/jira/browse/ARROW-15730
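
When investigating such differences, it can help to check which allocator backend is in use and to switch between them. A sketch (which backends are available depends on how pyarrow was built):

import pyarrow as pa

# Which allocator backend is the current default ("mimalloc", "jemalloc" or "system")?
print(pa.default_memory_pool().backend_name)

# Bytes currently allocated through the default pool.
print(pa.total_allocated_bytes())

# The backend can be switched at runtime ...
pa.set_memory_pool(pa.system_memory_pool())
# ... or selected before importing pyarrow via the
# ARROW_DEFAULT_MEMORY_POOL environment variable.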

anjakefala commented 8 months ago
 MIMALLOC_SHOW_STATS=1 python program.py 

will show additional statistics from the mimalloc allocator specifically.

anjakefala commented 6 months ago

Add the information in this thread: https://github.com/apache/arrow/issues/40301