joanhey closed this issue 1 year ago
Actually I got the idea at first. However, there is not much difference between libs for minor versions, and over time the benchmarking environment changes for both hardware and software; that will keep happening (OS updates, changes in this repo, PHP version updates, etc.),
so comparing the results across runs may not be useful. In the end, I put a .gitignore in /output/.
If you have any idea to change the game, feel free to re-open this issue.
It's useful for any developer, to help catch performance regressions in their framework or lib.
The developer only wants the information about their own framework, and wants to check for performance regressions locally over time.
Of course other things affect the results, but this is for the developer's local use.
Here is an example from another benchmark, with the OS-related problems visible. But in between those, a dev can check for regressions at any time just by running the bench.
I created this graph with the data from Nginx without PHP:
A developer makes n changes to their code,
then runs the bench again locally to check for performance regressions. Right now they need to store 2 tables and later calculate the diff, .... This needs to be automatic, and visible over time, to see whether the performance of their code follows an ascending or descending trend.
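The "store 2 tables, calculate the diff" step could be automated with something as small as this sketch. It assumes each result file has lines of the form `<framework> <requests/sec>`; the real log format may differ, and `bench_diff` is a hypothetical helper name.

```shell
# Hypothetical sketch: compare two saved benchmark result files and print
# the per-framework delta. Assumes each line is "<framework> <requests/sec>"
# (the real results log format may differ).
bench_diff() {
  awk 'NR==FNR { base[$1] = $2; next }   # first file: remember the baseline
       $1 in base {                      # second file: compute the % change
         d = ($2 - base[$1]) / base[$1] * 100
         printf "%-20s %10.1f -> %10.1f  (%+.1f%%)\n", $1, base[$1], $2, d
       }' "$1" "$2"
}
```

A negative percentage in the last column would flag a regression automatically, instead of the developer eyeballing two tables.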
Benchmark as a tool.
With tests we check that our code runs without errors. With the bench, we check that our code has no performance regressions.
Every developer runs it locally.
The same way a developer runs phpstan, php-cs-fixer, ...: they can run the bench at any time for their code.
https://github.com/myaaghubi/PHP-Frameworks-Bench/blob/e710ff7e84559d0f5ebdda2d46331059acdb2945/README.md?plain=1#L4-L12
Yeah, a developer can use PHP-Frameworks-Bench
as a tool to measure even a full PHP project, which is great. However, the purpose of this repo is just to measure the minimum bootstrap cost, so we can provide more accurate/fair results for a comparison between libs/frameworks.
By its nature, the benchmarking process can produce slightly higher or lower numbers each time it runs! That means we can't use it to detect small performance changes. A longer benchmark provides a better and more reliable score, but the same problem still exists even for a 24-hour benchmark.
Locally we already have results.hello_world.log;
on a new benchmark it is moved to results.hello_world.log.old.
We can extend that to keep multiple results.<date>.log files
and compare them on a graph.
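Keeping a dated history instead of a single `.old` file could look like this sketch. The path and filenames follow the discussion above, not a confirmed repo layout, and `archive_results` is a hypothetical helper name.

```shell
# Hypothetical sketch: instead of overwriting results.hello_world.log.old,
# archive each run with a timestamp so a history of runs accumulates.
archive_results() {
  log="$1"                              # e.g. output/results.hello_world.log
  [ -f "$log" ] || return 0             # nothing to archive yet
  stamp=$(date +%Y%m%d-%H%M%S)
  mv "$log" "${log%.log}.$stamp.log"    # -> results.hello_world.<stamp>.log
}
```

Each run would then call this before writing a fresh log, and the accumulated `results.hello_world.<stamp>.log` files become the data points for a graph.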
I'll add more options/commands later: 98e53de25305add254c0e9cf3997113837838855, ed3a6ccd01e6d98c5c7f7dbde30815d72607dc73, e5a4652e77c9eaff7c0791a1bfb6043eda07a496 (changed to 15968c9df8cabad5230625594f3380d54df18e81).
Like I always say, a benchmark is NOT a competition, but a very good tool to optimize the code.
If we create dirs named with the datetime in the output, we can later have another graph showing performance over time.
We only need: for each dir in output, read
results.hello_world.log
and display it over time in a graph per framework. And we can delete any dir in output at any moment. So a developer can measure the performance of the changed code over time, and check which changes made the framework slower within that time frame. They can also visually check performance across different versions: is v3.4.1 slower than v3.4.0?
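The "for each dir in output" collection step could be sketched like this. It assumes each datetime-named dir holds a results.hello_world.log with `<framework> <requests/sec>` lines; the real format may differ, and `framework_series` is a hypothetical helper name.

```shell
# Hypothetical sketch: walk datetime-named dirs under output/ and print a
# time series for one framework, ready to feed into gnuplot or a spreadsheet.
framework_series() {
  fw="$1"; root="${2:-output}"
  for dir in "$root"/*/; do
    log="$dir/results.hello_world.log"
    [ -f "$log" ] || continue
    val=$(awk -v f="$fw" '$1 == f { print $2 }' "$log")
    printf '%s %s\n' "$(basename "$dir")" "$val"
  done
}
```

Deleting a dir simply removes that point from the series, which matches the "we can delete any dir at any moment" idea above.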