TechEmpower / FrameworkBenchmarks

Source for the TechEmpower Framework Benchmarks project
https://www.techempower.com/benchmarks/

New benchmark idea (low effort from benchmark harness authors) #5506

Open lpereira opened 4 years ago

lpereira commented 4 years ago

Here's an idea for a new benchmark:

This should mimic a typical workload fairly well and provide a more "balanced" result.

There's no work to be done by any of the frameworks. (Bonus point: this should reduce some of the "smoke and mirrors" some frameworks perform to game each individual benchmark.)

NateBrady23 commented 4 years ago

Thinking about it now, there might be some work to be done by frameworks. Some frameworks split their tests across configurations, so that plaintext and JSON are in one configuration while the database tests are in another. We'd want them all in the same test, and the framework would then have to add the test key to those configurations so we'd know we could run the "all_tests" (or whatever we'd call it) benchmark.
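For illustration, a combined run would require all four endpoint URLs under a single test entry. A minimal sketch of such a check, where the URL key names match the `benchmark_config.json` format but the entry itself and the `qualifies` helper are hypothetical:

```python
# Hypothetical combined test entry in the benchmark_config.json shape:
# one named variant ("default") defining all four endpoint URLs.
combined = {
    "default": {
        "json_url": "/json",
        "plaintext_url": "/plaintext",
        "query_url": "/queries?queries=",
        "fortune_url": "/fortunes",
    }
}

REQUIRED = ("json_url", "plaintext_url", "query_url", "fortune_url")

def qualifies(test_entry):
    """True if any variant of this test entry defines every required URL."""
    return any(all(key in variant for key in REQUIRED)
               for variant in test_entry.values())

print(qualifies(combined))  # → True
```

A framework whose plaintext/JSON and database URLs live in separate entries would fail this check until it merged them into one configuration.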

lpereira commented 4 years ago

Indeed. I ran this script:

```python
#!/usr/bin/env python3

import os
import json

# URL keys a test configuration must define to cover every benchmark type.
REQUIRED_KEYS = ('json_url', 'plaintext_url', 'query_url', 'fortune_url')

for root, _, files in os.walk('.'):
    for name in files:
        if name != 'benchmark_config.json':
            continue

        path = os.path.join(root, name)
        with open(path, 'r') as f:
            config = json.load(f)

        for test in config.get('tests', []):
            # Print each qualifying config once, even if several of its
            # test variants define all four URLs.
            if any(all(key in variant for key in REQUIRED_KEYS)
                   for variant in test.values()):
                print(path)
                break
```

And it found 144 frameworks that would be able to run such a benchmark without any effort on their part.

So maybe call this "low-effort" instead of "no-effort"? :)