matus-tomlein closed this pull request 2 years ago
@mscwilson Hey Miranda, I thought I'd tag you here in case this is interesting for you since you were working on similar performance testing tasks. No need to review the C++ code, probably more interesting is the general approach and procedure for performance testing. If you have any suggestions or would like to chat about it, let me know.
For the Python comparison table, can you add a column for n? Would give more context to the mean and other values.
Thanks for the comments, both!
Miranda, I changed the Python script to also show the number of operations and threads.
Alex, I don't have a clear idea yet how to integrate with CI. It would be ideal to run the tests on every tagged release, but I'm not sure how to store the results of the test. One option would be to add them as an asset to the release, as Paul suggested yesterday. It requires some more thought; let's first see how useful the tests are as we change stuff in the tracker.
Issue: #37
Overview
This PR adds a new executable to the project that runs simple performance tests and logs their output. The goal is to be able to measure the effect of changes in the tracker code on performance.
The tests only measure the performance of tracking events, without sending them over the network. The idea is to measure the impact of the tracker on the "main" thread, or whichever thread events are tracked on. The performance tests do not account for asynchronous operations (such as sending events).
The aim is to evaluate performance by comparison, and there are several comparisons that the tests enable. Results are appended to the performance/logs.txt file, and there is a script that extracts statistics from the historical and the latest runs. Each variation runs the same procedure: it tracks 10,000 timing, 10,000 screen view, and 10,000 structured events in 5 threads in parallel (i.e., 150,000 events in total).
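For orientation, here is a minimal sketch of the shape of one such variation. The real tests are C++ code calling the tracker API; track_event below is just a hypothetical stand-in that builds a payload on the calling thread without any network I/O:

```python
# Minimal sketch of one test variation (the real tests are C++ and call the
# tracker; track_event is a hypothetical stand-in with no network I/O).
import threading
import time

EVENTS_PER_TYPE = 10_000   # timing, screen view, and structured events each
THREAD_COUNT = 5           # events are tracked from 5 threads in parallel

def track_event(event_type, index):
    # Stand-in for a tracker track() call: build the event payload on the
    # calling thread and stop there; nothing is sent anywhere.
    return {"type": event_type, "index": index}

def run_worker():
    for event_type in ("timing", "screen_view", "structured"):
        for i in range(EVENTS_PER_TYPE):
            track_event(event_type, i)

def run_variation():
    threads = [threading.Thread(target=run_worker) for _ in range(THREAD_COUNT)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start  # wall-clock time for all 150,000 events

if __name__ == "__main__":
    print(f"tracked 150,000 events in {run_variation():.3f}s")
```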
Running the performance tests
To run the performance tests under macOS, you will need to build the project using make and then run the tests.

Performance logs
Each run of the tests appends a JSON entry to the performance/logs.txt file, with a desktop context describing the current machine and statistics extracted from the test. One measurement looks like this:
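As a rough, hypothetical illustration only (the field names and values below are made up, not the tracker's actual schema), a parsed log entry has roughly this shape:

```python
# Hypothetical shape of one parsed log entry; field names and values are
# illustrative only, not the tracker's actual schema.
example_entry = {
    "desktop_context": {"os": "macOS", "cpu_cores": 8, "memory_gb": 16},
    "statistics": {"operations": 150_000, "threads": 5, "mean_ms": 0.01},
}
```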
Comparing with historical performance

There is a simple Python script that compares the last run with historical performance. It outputs a table with several statistics.
Running the comparison script generates a table summarising the latest run against the historical results, including the number of operations and threads alongside the mean and other statistics.
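As a rough sketch of what such a comparison can look like (this is not the script from the PR; the file layout and the "statistics"/"mean_ms" fields are assumptions carried over from the hypothetical entry above), the approach boils down to reading the JSON lines from performance/logs.txt, splitting off the latest entry, and printing aggregate statistics for each group:

```python
# Rough sketch of a comparison over performance/logs.txt (not the actual script
# from this PR; the "statistics"/"mean_ms" fields are assumptions from above).
import json
from statistics import mean, stdev

def load_entries(path="performance/logs.txt"):
    # Each line of the log file is one JSON measurement appended by a test run.
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

def summarise(values):
    return {
        "n": len(values),
        "mean": mean(values),
        "stdev": stdev(values) if len(values) > 1 else 0.0,
        "min": min(values),
        "max": max(values),
    }

if __name__ == "__main__":
    entries = load_entries()
    durations = [e["statistics"]["mean_ms"] for e in entries]
    groups = {"historical": durations[:-1], "last": durations[-1:]}
    print(f"{'':12}{'n':>6}{'mean':>12}{'stdev':>12}{'min':>12}{'max':>12}")
    for label, values in groups.items():
        if not values:
            continue
        s = summarise(values)
        print(f"{label:12}{s['n']:>6}{s['mean']:>12.4f}{s['stdev']:>12.4f}"
              f"{s['min']:>12.4f}{s['max']:>12.4f}")
```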