Due to the larger restructuring, many of the benchmarks have become obsolete. To bring the benchmarking back up to a good standard, we should re-implement some of the previous benchmarks and also add some new ones.
- [ ] Microbenchmarks for stress-testing the system (see #136, currently in development).
- [ ] Any benchmarks / workloads mentioned in the Portals Onward'22 paper, to be included in the microbenchmarks.
- [ ] The NEXMark benchmark (at minimum Queries 1–4, as presented in the paper).
- [ ] The SAVINA actor benchmark (for the Actor Library).
- [ ] A benchmark for the PortalsJS runtime. This could reuse the microbenchmarks.
- [ ] Optional: A custom benchmark for stateful serverless workloads.
- [ ] Optional: The YCSB benchmark.
- [ ] Optional: A benchmark that runs across JS and JVM nodes (for hybrid cloud/edge executions).
- [ ] Optional: Retwis.
- [ ] Optional: DeathStarBench.
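As a starting point for the microbenchmark item, a minimal throughput-harness sketch is shown below. All names here (`MicroBenchmark`, `Result`, the warmup count) are illustrative assumptions, not part of the Portals API; the actual harness in #136 may look different.

```scala
// Hypothetical microbenchmark harness: measures how many events per second
// a work closure can process. Names are illustrative, not Portals API.
object MicroBenchmark {
  final case class Result(events: Long, nanos: Long) {
    // Events per second; nanos is converted to seconds.
    def throughput: Double = events.toDouble / (nanos / 1e9)
  }

  def run(events: Long, warmup: Long = 10000L)(work: Long => Unit): Result = {
    var i = 0L
    while (i < warmup) { work(i); i += 1 } // warm up the JIT before timing
    val start = System.nanoTime()
    i = 0L
    while (i < events) { work(i); i += 1 }
    Result(events, System.nanoTime() - start)
  }
}
```

Usage would be along the lines of `MicroBenchmark.run(1000000L)(i => processEvent(i))`, where `processEvent` stands in for whatever operation is under test.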
In addition, the benchmarks should be run and checked by the GitHub Actions workflows.
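A CI step could invoke a small smoke-run script such as the sketch below. The sbt project and task names are assumptions for illustration; they should be replaced with the repository's actual module names.

```shell
#!/bin/sh
set -eu
# Hypothetical CI smoke-run of the benchmarks; module/task names are assumed.
sbt "benchmarks/compile"
sbt "benchmarks/test"
```

This keeps CI cheap by compiling and smoke-testing the benchmarks rather than running full measurement campaigns on every push.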
Notes: