Closed — scrumit closed this issue 7 years ago
I agree that it would be great to include benchmarks with an implementation/tests, but I don't agree that it should be compulsory. In general, benchmarking is a difficult enterprise with many moving parts, and it is likely to be repeated by the clients anyway (and possibly modified to suit their needs). Furthermore, because benchmarks are sensitive to many factors, deviations from the organization's published results can confuse clients and even mislead them about the quality of an implementation. In other words:
Benchmarking is a responsibility that should rest with the clients, not with the organizations providing the API implementation.
@scrumit - what is the goal of this? Do you want a benchmark?
I think that producing a high-quality, comprehensive test suite is the essential goal. That is a difficult enough exercise in itself.
For an API, testing conformance, implementability and compatibility are more important than performance. Unless the API requires a particular level of performance for a feature, I don't think adding timing results helps.
This has nothing to do with performance, apologies if you misunderstood me.
I've worked on interoperability testing of protocols only to find that loopholes were exploited. For example: multiple command sequences sent back to back because there was no flow control or "command ready" indicator, or protocols stuffed with binary data because the spec didn't exclude non-textual data. This laxity in protocol specifications is how we end up with exploitable buffer overruns.
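To illustrate the binary-data loophole, here is a minimal sketch (hypothetical names, not from any real spec): a lax acceptance check that merely requires a non-empty field lets arbitrary bytes through, while a check that explicitly restricts the field to bounded printable text rejects them.

```python
import string

# Set of byte values the (hypothetical) strict spec permits in a text field.
PRINTABLE = set(string.printable.encode())

def lax_accepts(payload: bytes) -> bool:
    # Lax spec: "a command field" with no exclusion of non-textual data.
    return len(payload) > 0

def strict_accepts(payload: bytes) -> bool:
    # Strict spec: printable text only, with a bounded length.
    return 0 < len(payload) <= 128 and all(b in PRINTABLE for b in payload)

binary_blob = b"\x00\x90\x90\xcc" * 8  # clearly not text

print(lax_accepts(binary_blob))     # True  -> the loophole
print(strict_accepts(binary_blob))  # False -> rejected
```

The point is that the difference between the two checks is a sentence in the specification, not extra engineering effort.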
Note, I wrote "If an organisation provides a test suite", not that test suites must be supplied. Just require that a submitter show a timed trace of the tests working.
If an organisation provides a test suite, it should also provide a result with timings showing at what rate and in what order the tests were executed.
This would allow other participants to repeat the test suite in the way the designing organisation intended.
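As a rough sketch of what such a timed trace could look like (all names here are hypothetical, not part of any proposed requirement): a runner that executes the suite in a fixed order and records start/end offsets for each test, so another participant can replay the suite at the same rate and in the same order.

```python
import time

def run_suite(tests):
    """Run (name, callable) pairs in order, recording timing offsets."""
    trace = []
    start = time.monotonic()
    for name, test in tests:
        t0 = time.monotonic() - start  # offset at which the test began
        test()
        t1 = time.monotonic() - start  # offset at which it finished
        trace.append((name, round(t0, 3), round(t1, 3)))
    return trace

# Placeholder tests standing in for real protocol/API checks.
suite = [
    ("connect", lambda: time.sleep(0.01)),
    ("send_command", lambda: time.sleep(0.02)),
    ("disconnect", lambda: time.sleep(0.01)),
]

for name, t0, t1 in run_suite(suite):
    print(f"{t0:7.3f}s -> {t1:7.3f}s  {name}")
```

Publishing output in this shape alongside the suite would let others verify they reproduced both the ordering and the pacing the designers used.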