NeurodataWithoutBorders / nwb_benchmarks

Benchmarking for NWB-related operations.
https://nwb-benchmarks.readthedocs.io/en/latest/

Support repeat for network and other custom tracking #29

Open oruebel opened 8 months ago

oruebel commented 8 months ago
          > One remaining item may be how we want to handle repeats for network benchmarks, but I think we can deal with that in a separate Issue/PR.

A follow-up sounds good

TBH, I think we could just set a `repeat` attribute for consistency, then wrap the context + operation in a basic for loop, appending to a `samples` list on each iteration and returning that list. This would produce a results structure identical to what we see in the timing tests.
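A minimal sketch of the repeat loop described above. The names `run_with_repeats`, `setup`, `operation`, and `teardown` are illustrative only, not actual nwb_benchmarks APIs:

```python
# Hypothetical sketch: repeat a benchmark and collect one sample per iteration,
# mirroring the {"samples": [...]} structure produced by the timing tests.
# None of these names are real nwb_benchmarks APIs.

def run_with_repeats(benchmark, repeat: int = 3) -> dict:
    """Run `benchmark` `repeat` times, collecting one sample per iteration."""
    samples = []
    for _ in range(repeat):
        benchmark.setup()              # fresh context for each repeat
        try:
            samples.append(benchmark.operation())
        finally:
            benchmark.teardown()       # clean up between repeats
    return {"samples": samples}


class _ExampleBenchmark:
    """Toy benchmark used to demonstrate the loop."""

    def setup(self):
        self.data = list(range(5))

    def operation(self):
        return sum(self.data)

    def teardown(self):
        del self.data


result = run_with_repeats(_ExampleBenchmark(), repeat=3)
```

Each iteration re-runs setup and teardown, so every sample starts from the same state.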

The only problem might be that for any tests with caching (including an in-memory LRU), the repeated operations would run in the same process, so the first run may have statistics that differ from the rest of the samples 🤔 For comparison, the reason the timing tests can repeat so easily is that repetition is a built-in feature of timeit, which runs each repeat in a new process.

_Originally posted by @CodyCBakerPhD in https://github.com/NeurodataWithoutBorders/nwb_benchmarks/issues/21#issuecomment-1958380268_

oruebel commented 8 months ago

> The only problem might be that for any tests with caching (including an in-memory LRU), the repeated operations would run in the same process, so the first run may have statistics that differ from the rest of the samples

I think that depends in part on what we put into the setup method. For the network benchmarks, we control what is measured via the network tracking decorator; if necessary, we could put setup and clean-up code inside the benchmark function itself to make sure we have clean repeats.
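The idea above could be sketched like this: move per-repeat setup and clean-up into the benchmark body so that only the operation falls inside the tracked region. The `network_activity_tracker` context manager here is a stand-in for the real tracking decorator, not the actual nwb_benchmarks implementation:

```python
# Hypothetical sketch: per-repeat setup/clean-up inside the benchmark body,
# so the tracked region covers only the operation itself.
# `network_activity_tracker` is a stand-in, not the real nwb_benchmarks tracker.
from contextlib import contextmanager


@contextmanager
def network_activity_tracker(log: list):
    """Stand-in tracker: records when a tracked region starts and stops."""
    log.append("start")
    try:
        yield
    finally:
        log.append("stop")


def benchmark_with_clean_repeats(repeat: int = 3) -> list:
    log = []
    for _ in range(repeat):
        cache = {}                          # per-iteration setup: fresh state
        with network_activity_tracker(log):
            cache["value"] = 42             # the measured operation
        cache.clear()                       # per-iteration clean-up
    return log


tracker_log = benchmark_with_clean_repeats(repeat=2)
```

Because the cache is created and cleared outside the tracked region on every iteration, each repeat measures the operation against a clean state rather than a warm cache.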