aai-institute / nnbench

A small framework for benchmarking machine learning models.
https://aai-institute.github.io/nnbench/
Apache License 2.0

Implement memo garbage collection #137

Closed: nicholasjng closed this 6 months ago

nicholasjng commented 6 months ago

Two steps:

1) Add back the compressed parameter representation to nnbench's runner. There is unfortunately no way around this, since we have no way out after the record gets persisted.
2) Implement another memo cache API for getting a memo ID (or None) for a memoized value. This is needed for eviction in teardown tasks, which are passed the memo values and not the memos themselves. (A sketch of such a lookup follows below.)
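To make the second step concrete, here is a minimal sketch of how an identity-based lookup over the memo cache could work. The cache layout (a module-level dict keyed by memo ID) and the helper bodies are assumptions for illustration only; the actual nnbench internals may differ, but the names and signatures mirror the `memo_cache_size`, `get_memo_by_value`, and `evict_memo` APIs used in the repro below.

```python
# Hypothetical sketch of an identity-based memo cache. Not the actual
# nnbench implementation, just an illustration of the lookup semantics.
from typing import Any, Optional

_memo_cache: dict[int, Any] = {}  # memo ID -> memoized value


def memo_cache_size() -> int:
    """Number of values currently held in the memo cache."""
    return len(_memo_cache)


def get_memo_by_value(val: Any) -> Optional[int]:
    """Return the memo ID for a cached value, or None if it is not cached.

    Teardown tasks only receive the materialized values, so the lookup
    has to go by object identity rather than by the memo itself.
    """
    for memo_id, value in _memo_cache.items():
        if value is val:
            return memo_id
    return None


def evict_memo(memo_id: int) -> Any:
    """Drop a value from the memo cache so it can be garbage-collected."""
    return _memo_cache.pop(memo_id)
```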

The documentation on memos was amended to showcase a dealloc in a teardown task.

The repro on a ~1GB array:

```python
import gc
import logging

import numpy as np

import nnbench
from nnbench.types import Memo, cached_memo
from nnbench.types.memo import evict_memo, get_memo_by_value, memo_cache_size

logging.basicConfig()
logger = logging.getLogger("nnbench")
logger.setLevel(logging.DEBUG)


class MyMemo(Memo[np.ndarray]):
    @cached_memo
    def __call__(self) -> np.ndarray:
        # One large random matrix per memo; each memo holds its own copy
        # in the cache once materialized.
        return np.random.random_sample((10000, 10000))


def tearDown(state, params):
    logger.debug(f"Current memo cache size: {memo_cache_size()}")
    logger.debug("Evicting memo for benchmark parameter 'a':")
    # Teardown tasks only see the memoized values, so look up the memo ID
    # by value before evicting it from the cache.
    m = get_memo_by_value(params["a"])
    if m is not None:
        evict_memo(m)
        gc.collect()
    logger.debug(f"New memo cache size: {memo_cache_size()}")


@nnbench.product(a=[MyMemo(), MyMemo(), MyMemo(), MyMemo()], tearDown=tearDown)
def matrixmult(a: np.ndarray, b: np.ndarray):
    return a @ b


if __name__ == "__main__":
    lhs = np.random.random_sample((10000,))

    runner = nnbench.BenchmarkRunner()
    res = runner.run(__name__, params={"b": lhs})
    print(res.benchmarks[0]["parameters"])
```

Image proof that it works (from the memray flamegraph):

[Screenshot of the memray flamegraph, 2024-03-27]

(The deallocs are the orange downward spikes.)

Closes #105.