-
A framework that is able to:
- execute benchmarks
- store the benchmark results for trend analysis
- store the benchmark results to compare benchmarks against each other
- store the results from differ…
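The capabilities listed above could be sketched minimally in Python. The names `execute_benchmark`, `store_results`, and `load_history` are hypothetical, and a JSON-lines file stands in for whatever storage backend such a framework would actually use:

```python
import json
import time
from pathlib import Path


def execute_benchmark(name, fn, repeats=3):
    """Run fn several times and keep the best wall-clock time."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return {"name": name, "seconds": min(timings)}


def store_results(result, path):
    """Append one result to a JSON-lines history file, enabling trend analysis."""
    with Path(path).open("a") as fh:
        fh.write(json.dumps(result) + "\n")


def load_history(path):
    """Load every stored result so different runs can be compared."""
    with Path(path).open() as fh:
        return [json.loads(line) for line in fh]
```

Appending one JSON object per run keeps the history trivially mergeable; comparisons between runs then reduce to loading the history and grouping by benchmark name.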
-
@jviotti points out that he put together some nice benchmarks as part of his JSON toolkit validator.
They live [here](https://github.com/sourcemeta-research/jsonschema-benchmark).
We should in…
-
Here we can discuss the benchmark analysis.
It could be useful to compare against the performance of other non-abstract implementations.
| Architecture | Comparison with other non-ACP | Conclusion |
|…
-
**Is your feature request related to a problem? Please describe.**
Go 1.17 provides a new `-shuffle` flag for `go test`, which randomizes the order in which tests run.
But this flag is bothersome for benchmar…
-
Performance is an important strength of CuPy.
For better quality assurance, it would be nice to have a way to make "fixed-point observations" of performance.
By having a baseline set of benchmarks, we can:
*…
-
Hello,
I'm currently evaluating distributed key-value store solutions, and it has come down to either FoundationDB or TiKV as the only viable options.
I've seen some other technical comparisons [for ex…
-
It has been proposed to automate the running of benchmarks for improved tracking of performance changes over time. Currently, we have benchmarks (as seen in [burn-benches](https://github.com/burn-rs/b…
-
When comparing 2 runs that do not have any common benchmarks, it would be nice to notify the user that this comparison does not make sense. Currently what happens is that the table becomes empty, but …
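One way to surface this is to fail fast when the two runs share no benchmark names, instead of rendering an empty table. The `compare_runs` function below is a hypothetical sketch of that check, not this project's actual comparison code; runs are assumed to be dicts mapping benchmark name to timing:

```python
def compare_runs(run_a, run_b):
    """Compare two benchmark runs; reject the comparison when they share no benchmarks."""
    common = run_a.keys() & run_b.keys()
    if not common:
        raise ValueError("the two runs share no benchmarks; this comparison does not make sense")
    # Ratio > 1.0 means run_b is slower than run_a on that benchmark.
    return {name: run_b[name] / run_a[name] for name in common}
```

Raising (or printing a warning) at this point gives the user an actionable message rather than a silently empty result.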
-
We need a reproducible performance suite for various graph operations, features, and implementations. It should also account for the differences between pure Python, Cython, and PyPy.
Ex…
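As a sketch of what one suite entry might look like, the hypothetical `benchmark` helper below times a pure-Python BFS using best-of-N repeats (which keeps results more reproducible across interpreters); Cython or PyPy variants could be swapped in behind the same interface:

```python
import timeit


def bfs_pure_python(adj, start):
    """Reference pure-Python BFS over an adjacency-dict graph.

    Cython or PyPy implementations would share this signature, so the
    suite can time each one interchangeably.
    """
    seen = {start}
    frontier = [start]
    while frontier:
        nxt = []
        for node in frontier:
            for neighbor in adj[node]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    nxt.append(neighbor)
        frontier = nxt
    return seen


def benchmark(fn, *args, repeat=5):
    """Best-of-N wall-clock timing of a single call to fn(*args)."""
    return min(timeit.repeat(lambda: fn(*args), number=1, repeat=repeat))
```

Taking the minimum over repeats filters out scheduler noise; reporting the implementation name alongside the timing would let the suite distinguish the pure-Python, Cython, and PyPy variants.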
-
Hi!
Great work all! It's really surprising that the core of the library is very readable yet so extensible.
I'm curious if there are any performance benchmarks for comparison with more popular f…