It would also be great to update the results shown in the README with up-to-date numbers. Some of the libraries have been optimized since you last ran the benchmark, which changes the ordering. This is what I'm getting locally (not directly comparable to your benchmark, as I don't have the Pux extension enabled):
### Worst-case matching

This benchmark matches the last route and an unknown route. It generates a randomly prefixed and suffixed route in an attempt to thwart any optimization. 1,000 routes, each with 9 arguments.

This benchmark consists of 10 tests. Each test is executed 1,000 times, the results pruned, and then averaged. Values that fall outside of 3 standard deviations of the mean are discarded.
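For reference, the pruning step described above (discard everything beyond 3 standard deviations of the mean, then average the rest) can be sketched as follows. This is not the harness's actual code (which is PHP); it's an illustrative Python sketch, and the `prune_and_average` name and the sample data are made up:

```python
import statistics

def prune_and_average(samples, sigmas=3):
    """Average samples after discarding values more than
    `sigmas` standard deviations from the mean."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    kept = [s for s in samples if abs(s - mean) <= sigmas * stdev]
    return statistics.mean(kept), len(kept)

# 1,000 timing samples, with a few outliers (e.g. GC pauses)
times = [0.00014] * 997 + [0.01, 0.02, 0.05]
avg, kept = prune_and_average(times)
# the three outliers are discarded; 997 samples remain
```

The "Results" column in the tables below is the number of iterations that survived this pruning, which is why it hovers just under 1,000.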
| Test Name | Results | Time | + Interval | Change |
| --- | ---: | ---: | ---: | --- |
| Symfony2 Dumped - unknown route (1000 routes) | 998 | 0.0001446434 | +0.0000000000 | baseline |
| FastRoute - unknown route (1000 routes) | 956 | 0.0001816755 | +0.0000370321 | 26% slower |
| FastRoute - last route (1000 routes) | 999 | 0.0001897163 | +0.0000450729 | 31% slower |
| Symfony2 Dumped - last route (1000 routes) | 988 | 0.0002044159 | +0.0000597725 | 41% slower |
| Symfony2 - unknown route (1000 routes) | 998 | 0.0007891961 | +0.0006445527 | 446% slower |
| Pux PHP - unknown route (1000 routes) | 998 | 0.0009405838 | +0.0007959404 | 550% slower |
| Symfony2 - last route (1000 routes) | 998 | 0.0010816638 | +0.0009370204 | 648% slower |
| Pux PHP - last route (1000 routes) | 998 | 0.0012993681 | +0.0011547247 | 798% slower |
| Aura v2 - last route (1000 routes) | 994 | 0.0456733020 | +0.0455286586 | 31476% slower |
| Aura v2 - unknown route (1000 routes) | 996 | 0.0495790971 | +0.0494344537 | 34177% slower |
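To read the table: "+ Interval" is each test's time minus the baseline's time, and "Change" is that interval as a percentage of the baseline. A quick check against the FastRoute unknown-route row (variable names here are my own):

```python
baseline = 0.0001446434  # Symfony2 Dumped - unknown route
time = 0.0001816755      # FastRoute - unknown route

interval = time - baseline            # the "+ Interval" column
pct_slower = round(interval / baseline * 100)  # the "Change" column
# interval ~= 0.0000370321, pct_slower == 26
```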
### First route matching

This benchmark tests how quickly each router can match the first route. 1,000 routes, each with 9 arguments.

This benchmark consists of 5 tests. Each test is executed 1,000 times, the results pruned, and then averaged. Values that fall outside of 3 standard deviations of the mean are discarded.