Attempting to ensure test runs are consistent between generators & easier to grok in reports
Summary
Alright, this is another slightly large change-set, but I think it's worth it for more granular test results. As a preface, the `AlwaysNew` subset of tests is likely just temporary; they feel like good baselines for the discussion. I think it's useful to see what effect PHP's native file/JSON functions have on the results compared to the library's own logic. That said, we could remove those benchmarks once we're in a more stable period.
Per my comment about how the random numbers were used, I think that was likely making things "too random" between revisions/iterations. As such, I've also included a new ID provider that yields a consistent set of IDs every time. This ensures we can actually compare apples to apples across generators. Since we now have a benchmark for every generator, we can say "when int 9 is passed, here's the performance".
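To illustrate the idea, a deterministic provider might look something like the sketch below. The class and method names are hypothetical (not the ones in this change-set), and the real wiring into the benchmarks may differ:

```php
<?php

// Hypothetical ID provider: yields the same parameter sets on every
// run, so results are comparable across revisions and generators.
final class ConsistentIdProvider
{
    public function provideIds(): \Generator
    {
        // Fixed, growing sets of sequential IDs -- no randomness
        // between runs, unlike the old rand()-based inputs.
        foreach ([1, 10, 100, 1000] as $count) {
            yield "{$count} ids" => ['ids' => range(1, $count)];
        }
    }
}
```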
Benchmark layout
Each benchmark class consists of a `setUp`/`tearDown` pair and then the same basic tests. This makes it easy to set up new tests for different generator styles just by editing the `setUp`. TBH we should be able to automate/generate these if we want.
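As a rough sketch of that layout (assuming phpbench-style docblock annotations, since we're on PHP 7.4; `ExampleGeneratorBench` and `SomeGenerator` are placeholders, not real classes from this change-set):

```php
<?php

/**
 * @BeforeMethods({"setUp"})
 * @AfterMethods({"tearDown"})
 */
class ExampleGeneratorBench
{
    /** @var object|null */
    private $generator;

    // Swapping in a different generator style should only require
    // editing setUp(); the bench* methods stay identical.
    public function setUp(): void
    {
        $this->generator = new SomeGenerator(); // placeholder generator
    }

    public function tearDown(): void
    {
        $this->generator = null;
    }
}
```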
Here's a description of what each benchmark method targets (a rough sketch of all three follows the list):

- `benchCreate` - Generates using a consistent set of IDs of growing size.
- `benchCreateRand` - Generates using a random set of IDs with a growing max value.
- `benchCreateAndDecode` - Expands on `benchCreate`: same idea, but also decodes the ID and asserts equality.
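Continuing the class sketched above, the three methods might look roughly like this. The `encode()`/`decode()` calls and the `@ParamProviders` wiring are assumptions about the generator API for illustration, not the actual code:

```php
<?php

// (These methods belong inside the benchmark class sketched above.)

/** @ParamProviders({"provideIds"}) */
public function benchCreate(array $params): void
{
    // Consistent IDs of growing size: identical input on every run.
    foreach ($params['ids'] as $id) {
        $this->generator->encode($id);
    }
}

/** @ParamProviders({"provideRandomIds"}) */
public function benchCreateRand(array $params): void
{
    // Random IDs drawn below a growing max value.
    foreach ($params['ids'] as $id) {
        $this->generator->encode($id);
    }
}

/** @ParamProviders({"provideIds"}) */
public function benchCreateAndDecode(array $params): void
{
    foreach ($params['ids'] as $id) {
        $encoded = $this->generator->encode($id);
        // Round-trip and assert equality, per the description above.
        assert($this->generator->decode($encoded) === $id);
    }
}
```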
Initial results
Note: Manually sorted for better comparisons...
Mac mini M1 2020 / PHP version 7.4.29, xdebug ✔, opcache ✔