Closed caibear closed 9 months ago
Thanks for the PR, I think this is very exciting for the benchmarks. I'll break down my thought process a bit:
With that in mind, here's what I propose:
Future work could add more input sizes to the benchmark data sets.
@caibear I would appreciate your feedback and thoughts. I understand this is probably a significant expansion of the intended scope, so I would of course help get this work done.
This pull request adds an interactive website. Given bandwidth and CPU limits, it calculates how many messages per second could be sent and received for different combinations of serialization crates and compression libraries.
See https://caibear.github.io/rust_serialization_benchmark/
For example, this is useful for estimating how many concurrent players an mk48.io server can handle on average. Given inputs of 1 TB/mo and 0.01 cores, it returns 437 updates/s for bitcode. Since mk48.io sends 10 updates/s per player, a server can handle 43.7 players. The second best is serde_bare + zstd, which returns 387 updates/s, i.e. 38.7 players.
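The calculation above boils down to taking the smaller of the bandwidth-limited and CPU-limited message rates. Here is a minimal sketch of that idea; the function and parameter names, message size, and per-message CPU time are illustrative assumptions, not the actual code in this PR:

```rust
// Hypothetical sketch (not the PR's implementation): messages per second
// are limited by whichever resource runs out first, bandwidth or CPU.

/// Sustainable message rate given both resource limits.
fn messages_per_second(
    bandwidth_bytes_per_sec: f64, // available network bandwidth
    cores: f64,                   // fraction of a CPU core available
    message_bytes: f64,           // serialized + compressed size of one message
    cpu_secs_per_message: f64,    // serialize + compress time on one core
) -> f64 {
    let bandwidth_limit = bandwidth_bytes_per_sec / message_bytes;
    let cpu_limit = cores / cpu_secs_per_message;
    bandwidth_limit.min(cpu_limit)
}

fn main() {
    // 1 TB per 30-day month, expressed in bytes per second.
    let bandwidth = 1e12 / (30.0 * 24.0 * 3600.0);
    // Message size and CPU cost here are made-up example numbers.
    let rate = messages_per_second(bandwidth, 0.01, 800.0, 2e-5);
    // mk48.io sends 10 updates/s per player, so capacity is rate / 10.
    println!("{:.1} updates/s -> {:.1} players", rate, rate / 10.0);
}
```

With these example inputs the rate is bandwidth-bound, which matches the intuition that cheap serialization formats mostly pay off by shrinking message size.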
The data is taken from a copy of the README.md embedded in the binary. Compression speeds are currently based on constants; ideally they would be measured during the benchmarks.
TODO
Add Cargo.lock?