Closed — mdboom closed this 2 years ago
Tests don't pass.
How does merge_profile_stats() work? Does it compute the average of N processes' timings, or does it accumulate time?
Is it more reliable than running a single process?
> Tests don't pass.
Noted. Wanted to get some feedback about the general concept here first.
> How does merge_profile_stats() work? Does it compute the average of N processes' timings, or does it accumulate time?
> Is it more reliable than running a single process?
Yes, it just accumulates timings from multiple profiling runs. That produces more accurate results, in the same way that running the benchmarks multiple times does.
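For reference, accumulating (rather than averaging) profile data across runs is what the stdlib `pstats.Stats.add()` does; a minimal sketch of that behavior, with an illustrative workload (the function names here are not from the PR):

```python
import cProfile
import pstats

def work():
    # Toy workload so the profiler has something to measure.
    sum(i * i for i in range(10_000))

def profile_once():
    # One profiling run, returned as a pstats.Stats object.
    prof = cProfile.Profile()
    prof.enable()
    work()
    prof.disable()
    return pstats.Stats(prof)

# Stats.add() accumulates: call counts and times are summed
# across runs, not averaged.
merged = profile_once()
for _ in range(4):
    merged.add(profile_once())
```

After merging five runs, `work` shows a call count of 5 and the summed cumulative time, which is why repeated profiling runs sharpen the picture the same way repeated benchmark runs do.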
I'm not sure I understand the request.
The temporary file only comes into play when using `bench_process`. In that case, the temporary filename is generated in the parent process and passed to the child process being benchmarked, specifically to make cleanup easier because the child could crash. (The temporary file is deleted whether the worker process succeeds or fails.)
For other kinds of benchmarks, there is no temporary file involved -- each child worker process merges its results directly into the output file. That has a separate problem: there is a race condition if multiple workers update the file at the same time. But the whole design here is to not benchmark things in parallel, so that should be OK.
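That merge-into-the-output-file step can be sketched with `pstats` as a read-modify-write (the helper name is hypothetical; this is not the PR's code):

```python
import cProfile
import os
import pstats

def merge_into(output_path, profiler):
    # Merge this worker's results into the shared output file.
    # Note this read-modify-write is only safe because workers do
    # not run in parallel; concurrent writers would race here.
    stats = pstats.Stats(profiler)
    if os.path.exists(output_path):
        # Stats.add() accepts a dump file and sums its timings in.
        stats.add(output_path)
    stats.dump_stats(output_path)
```

Serializing the workers is what makes this safe; a pipe back to the parent (as discussed below for `bench_process`) would be the alternative if parallelism were ever wanted.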
This certainly could use a pipe to communicate all the profiling data to the parent process -- all platforms already use one to communicate benchmark results from the worker processes. But it would add complexity in a number of places, since the "protocol", which right now dumps things directly into the master benchmarking results, would have to be extended to split the profiling results out into separate files.
cc @corona10
cc @pablogsal
I will leave a review by this weekend. cc @vstinner
I am going to release the new version of pyperf 7 days after this PR is merged. cc @vstinner
Please update the following documentation.
@corona10: I already did that in this PR. Is there something specific missing there that you'd like to see?
This should make it much easier to collect profiles for benchmarks in the pyperformance suite.
Implements #133.
TODO: