json-ld / json-ld.org

JSON for Linked Data's documentation and playground site
https://json-ld.org/

Add benchmark files #704

Closed. davidlehn closed this issue 4 years ago.

davidlehn commented 4 years ago

Benchmarks previously discussed in the W3C JSON-LD WG: https://github.com/w3c/json-ld-api/pull/59

This patch adds the benchmark data here so the community, rather than the JSON-LD WG, can work on it.

The EARL and manifest additions are a bit ad-hoc. Comments welcome.
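For reference, the benchmark entries follow the same general shape as the existing test-suite manifest entries, roughly like the sketch below (the id, name, and file paths are illustrative only, not the actual contents of this patch):

```js
// Illustrative only: a benchmark manifest entry in the shape used by the
// JSON-LD test-suite manifests. The id, name, and paths are made up.
const entry = {
  "@id": "#tb001",
  "@type": ["jld:PositiveEvaluationTest", "jld:ExpandTest"],
  "name": "Expand a large, deeply nested document",
  "input": "benchmarks/b001-in.jsonld",
  "expect": "benchmarks/b001-out.jsonld"
};
```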

Comparison tool coming soon.

gkellogg commented 4 years ago

Perhaps add a “jld:BenchmarkTest” to each test entry as well.

Also, maybe we can update the site to reference them.

Not sure what the role of expected results is for benchmark tests.

davidlehn commented 4 years ago

> Perhaps add a “jld:BenchmarkTest” to each test entry as well.

What would that be used for? I was going with the idea that the benchmarks would be regular tests. So any test can be run with the regular test runner to check valid operation. And any test can be run with the benchmark code. Are you suggesting that tag be used in the general test suite to mark interesting tests to benchmark?

> Also, maybe we can update the site to reference them.

Yeah, these are just the initial bits. I'll link to them from the main site when they're ready.

> Not sure what the role of expected results is for benchmark tests.

You mean the "expected" properties? That's just so a regular test runner can verify the test is valid.
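A minimal sketch of what that could look like, assuming the jsonld.js processor and an expand-style entry (none of this is the actual runner code):

```js
// Sketch only: a regular test runner using the "expect" document to
// verify that the benchmark's test data is still a valid test case.
const fs = require('fs/promises');
const assert = require('assert');
const jsonld = require('jsonld'); // assuming the jsonld.js processor

async function checkExpandTest(entry) {
  const input = JSON.parse(await fs.readFile(entry.input, 'utf8'));
  const expected = JSON.parse(await fs.readFile(entry.expect, 'utf8'));
  const actual = await jsonld.expand(input);
  assert.deepStrictEqual(actual, expected);
}
```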

gkellogg commented 4 years ago

> Perhaps add a “jld:BenchmarkTest” to each test entry as well.

> What would that be used for? I was going with the idea that the benchmarks would be regular tests. So any test can be run with the regular test runner to check valid operation. And any test can be run with the benchmark code. Are you suggesting that tag be used in the general test suite to mark interesting tests to benchmark?

I use a generic test runner, which keys off of @type and options to do setup and evaluation. For a benchmark test, I would likely do some preflight and run the test multiple times to get a fair assessment of actual performance, and comparing with a result is not critical. Unless I build in special knowledge of specific manifest paths or names, I would have no way of knowing which tests to do this for. I don't see the need to create a completely separate test runner for benchmark tests.
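To make that concrete, a generic runner could branch on the extra type along these lines (a sketch of the idea only; the type name is the suggestion from above, and the warm-up and run counts are arbitrary):

```js
// Sketch only: a generic runner that switches into benchmark mode when a
// test entry carries the suggested "jld:BenchmarkTest" type.
const { performance } = require('perf_hooks');

async function runEntry(entry, operation) {
  const types = [].concat(entry['@type'] || []);
  if (!types.includes('jld:BenchmarkTest')) {
    // regular evaluation test: run once and compare against "expect"
    return operation(entry);
  }
  // benchmark mode: warm up first, then time repeated runs; comparing
  // the output with "expect" is not critical here
  for (let i = 0; i < 5; i++) {
    await operation(entry);
  }
  const runs = 100;
  const start = performance.now();
  for (let i = 0; i < runs; i++) {
    await operation(entry);
  }
  return { msPerRun: (performance.now() - start) / runs };
}
```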