-
### Problem Statement
I want to understand how the Go SDK's performance has changed over time.
### Proposed Solution
Use the newly available dedicated environments in the Otel Org to run benc…
-
**Is your feature request related to a problem? Please describe.**
The Nix evaluator is very performance-sensitive (it sits on the hot path in many use cases); however, its performance is never tracked, leadi…
-
With the core examples being merged back into Mesa in #2358 and now available in [`examples`](https://github.com/projectmesa/mesa/tree/main/examples), we can now start working through our checklist:
- [x]…
-
## Problem
Rather than having to run benchmarks ad hoc, it would be nice to see the results of our benchmarks over time (particularly in CI) so we can more easily catch any regressions. https://bencher.d…
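As a minimal sketch of what CI could collect on every commit: Go's standard `testing` package can run a benchmark programmatically and print machine-readable metrics. The function under test (`countRunes`) and the output format here are illustrative assumptions, not part of any project's actual pipeline.

```go
package main

import (
	"fmt"
	"testing"
)

// countRunes is a hypothetical stand-in for the code under test.
func countRunes(s string) int {
	n := 0
	for range s {
		n++
	}
	return n
}

func main() {
	// testing.Benchmark runs the function with an auto-scaled b.N and
	// measures timings, much like `go test -bench` would in CI.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			countRunes("example input")
		}
	})
	// Emit one line per run; a CI job can append this to a time series
	// and a tracking tool can plot it across commits.
	fmt.Printf("iterations=%d ns/op=%d\n", res.N, res.NsPerOp())
}
```

In a real setup the same numbers usually come from `go test -bench=. -benchmem` output parsed by the tracking tool, rather than a hand-rolled `main`.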
-
Hi,
I hope these new papers may be of interest to you.
**SURVEY:**
- Deep learning for dynamic graphs: models and benchmarks, accepted at IEEE Transactions on Neural Networks and Learning System…
-
As a ZITADEL contributor, I want my benchmark outputs to be collected, so that I can verify that my changes do not impact performance of ZITADEL.
Acceptance criteria:
- [ ] Benchmarks are execut…
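To make "my changes do not impact performance" checkable by a machine, the collected output has to be compared against a stored baseline. A minimal sketch of such a check follows; the function name, the example ns/op values, and the 10% tolerance are all illustrative assumptions, not part of the ZITADEL pipeline.

```go
package main

import "fmt"

// checkRegression compares a current ns/op measurement against a stored
// baseline and reports the relative change plus whether it exceeds the
// allowed tolerance (e.g. 0.10 for 10%).
func checkRegression(baselineNsOp, currentNsOp, tolerance float64) (float64, bool) {
	delta := (currentNsOp - baselineNsOp) / baselineNsOp
	return delta, delta > tolerance
}

func main() {
	// Hypothetical values: baseline 1200 ns/op, current run 1500 ns/op.
	delta, regressed := checkRegression(1200, 1500, 0.10)
	fmt.Printf("delta=%.0f%% regressed=%v\n", delta*100, regressed)
	// → delta=25% regressed=true
}
```

A CI job can fail the build when `regressed` is true, turning the acceptance criterion into a gate rather than a manual review step.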
-
As part of the CI infrastructure, can you set up benchmarks with other runtimes and native executions in major languages, which can be made available for users to track?
-
### Search before asking
- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
Full official NVIDIA Je…
-
It would be beneficial to have continuous tracking of our benchmark tests, because regressions (or unexpected improvements) otherwise go undetected, at least for a while. AFAICT, currently the only b…