Open itamarst opened 4 months ago
For libraries that use Python's memory allocator APIs out of the box, tracemalloc
from the Python standard library is all you need. See e.g. https://github.com/pola-rs/polars/blob/main/py-polars/tests/unit/conftest.py#L169 for a sample pytest
fixture.
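As a minimal sketch of the same idea (the helper name and the 100 MiB budget are made up; the linked polars fixture is more elaborate), a tracemalloc peak-memory assertion can be wrapped in a context manager and reused from tests or a pytest fixture:

```python
import tracemalloc
from contextlib import contextmanager


@contextmanager
def assert_peak_memory(limit_bytes):
    """Fail if peak memory allocated through Python's allocator APIs
    exceeds limit_bytes while the block runs.

    Hypothetical helper; tracemalloc only sees allocations that go
    through Python's memory allocator APIs.
    """
    tracemalloc.start()
    try:
        yield
    finally:
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        assert peak <= limit_bytes, (
            f"peak memory {peak} bytes exceeded budget of {limit_bytes} bytes"
        )


# Example: a million-entry list needs a few MiB, well under 100 MiB.
with assert_peak_memory(100 * 1024 * 1024):
    data = [0] * 1_000_000
```

The same body can be exposed as a pytest fixture that wraps each test, which is roughly what the polars conftest linked above does.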
For libraries that use other memory allocators, options include:

- pytest-memray
- tracemalloc registration APIs, e.g. https://github.com/pola-rs/polars/blob/main/py-polars/src/memory.rs. Because this tracking is performance-sensitive, Polars only enables it in the debug compilation profile, which is the profile used to run unit tests during development.
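For the pytest-memray route, the plugin provides a `limit_memory` marker that caps a test's allocations, including those made by native allocators. A sketch (the test body is a made-up workload; the cap is only enforced when pytest runs with the memray plugin active):

```python
import pytest


# With pytest-memray installed and enabled, this marker makes the test
# fail if it allocates more than the stated amount.
@pytest.mark.limit_memory("100 MB")
def test_transform_stays_under_budget():
    # Hypothetical workload standing in for real library code.
    data = [float(i) for i in range(100_000)]
    assert sum(data) > 0
```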
It's easy to have high memory usage without noticing, or to add regressions that increase memory usage. As a developer of a library, I would like to ensure my code doesn't use too much memory; making assertions about memory usage in tests is one way to do this.
The key measure is peak (high-water-mark) memory usage. Memory usage in scientific computing tends to be spiky, dominated by large temporary allocations, and the peak is the bottleneck that drives hardware requirements.
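To illustrate why the peak matters, tracemalloc reports current and peak usage separately; a large temporary shows up only in the peak (the sizes here are arbitrary):

```python
import tracemalloc

tracemalloc.start()

# Spiky allocation pattern: a large temporary, freed immediately.
temp = bytearray(50 * 1024 * 1024)  # ~50 MiB spike
del temp
result = bytearray(1024)  # small steady-state footprint

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Current usage is tiny, but the peak records the 50 MiB spike --
# the number that actually determines hardware requirements.
assert peak >= 50 * 1024 * 1024
assert current < 10 * 1024 * 1024
```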
Unlike performance benchmarking, memory measurements can be integrated into existing tests as an additional assertion; no special setup is required. So this is actually pretty easy technically; the main constraint is education and cultural norms around what counts as best practice.
High memory usage is much harder to notice than slow code: until you hit a certain threshold and start swapping, there are no visible symptoms. But it has a significant financial cost at scale. For an example of a regression, see https://github.com/pola-rs/polars/issues/15098: someone introduced a bug and never noticed, because the main symptom was higher memory usage.