
Effect handlers benchmarks suite

The project aims to build a repository of systems that implement effect handlers, benchmarks implemented in those systems, and scripts to build the systems, run the benchmarks, and produce the results.

A system may be either a programming language with native support for effect handlers, or a library that embeds effect handlers in another programming language.

Quick start

Ensure that Docker is installed on your system. Then,

$ make bench_ocaml

runs the Multicore OCaml benchmarks and writes the results to benchmarks/ocaml/results.csv.
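The timings themselves are collected with hyperfine (see Contributing below). Conceptually, each benchmark run boils down to an invocation along the following lines, where the binary name is purely illustrative:

$ hyperfine --export-csv results.csv ./countdown.exe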

System availability

The following systems are currently available:

- Eff
- Effekt
- Handlers in Action
- Koka
- libhandler
- libmpeff
- Links
- Multicore OCaml

Benchmark availability

| Benchmark | Eff | Effekt | Handlers in Action | Koka | Multicore OCaml |
| --- | --- | --- | --- | --- | --- |
| Countdown | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Fibonacci Recursive | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Product Early | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Iterator | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Nqueens | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Generator | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Tree explore | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Triples | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Parsing Dollars | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Resume Nontail | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Handler Sieve | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: | :heavy_check_mark: |

Legend: :heavy_check_mark: = benchmark implemented for the system; :x: = not yet implemented.
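For concreteness, here is a minimal sketch of the kind of program these benchmarks are: Countdown expressed with OCaml 5's native effect handlers. The names (`run_state`, `countdown`) and the choice to interpret the state effect with a mutable cell are illustrative assumptions, not the repository's actual code.

```ocaml
(* A minimal sketch of the Countdown benchmark using OCaml 5's native
   effect handlers. Illustrative only: the actual programs in
   benchmarks/ may differ in structure and in how state is threaded. *)
open Effect
open Effect.Deep

type _ Effect.t += Get : int Effect.t | Put : int -> unit Effect.t

(* Interpret Get/Put against a mutable cell seeded with [init]. *)
let run_state (init : int) (f : unit -> 'a) : 'a =
  let state = ref init in
  match_with f ()
    { retc = (fun x -> x);
      exnc = raise;
      effc =
        (fun (type b) (eff : b Effect.t) ->
          match eff with
          | Get ->
              Some (fun (k : (b, _) continuation) -> continue k !state)
          | Put n ->
              Some (fun (k : (b, _) continuation) ->
                state := n;
                continue k ())
          | _ -> None) }

(* Count the state down to zero, performing one effect per step. *)
let rec countdown () : int =
  let n = perform Get in
  if n = 0 then n
  else begin
    perform (Put (n - 1));
    countdown ()
  end

let () = Printf.printf "%d\n" (run_state 1_000_000 countdown)
```

Countdown is essentially a tight loop of state operations, so it mainly measures the per-operation cost of performing an effect and resuming its continuation.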

Directory structure

Contributing

Benchmarking chairs

The role of the benchmarking chairs is to curate the repository, monitor the quality of benchmarks, and solicit new benchmarks and fixes to existing ones. Each benchmarking chair serves two consecutive terms of six months each.

The current co-chairs are

Past co-chairs

Benchmark

If you wish to implement <goat_benchmark> for system <awesome_system>,

Description

If you wish to add a new benchmark <goat_benchmark>,

System

If you wish to contribute a system <awesome_system>,

Ideally, you will also add benchmarks to go with the new system.

Having a Dockerfile aids reproducibility and ensures that we can build the system from scratch natively on a machine if needed. The benchmarking chair will push the image to Docker Hub so that systems are easily available for wider use.

We use Ubuntu 22.04 as the base image for building the systems and hyperfine to run the benchmarks.
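As a rough illustration, a system Dockerfile might look like the sketch below. Only the ubuntu:22.04 base image and the use of hyperfine come from this repository's conventions; the package list, paths, and build steps are hypothetical placeholders.

```dockerfile
# Hypothetical Dockerfile sketch for packaging <awesome_system>.
# ubuntu:22.04 and hyperfine follow the repository's conventions;
# everything else here is an illustrative placeholder.
FROM ubuntu:22.04

# Build tools for the system, plus hyperfine for timing the runs
# (assumes hyperfine is installable via apt on this base image).
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
       build-essential git ca-certificates hyperfine \
    && rm -rf /var/lib/apt/lists/*

# Build the system from source (placeholder build steps).
COPY . /awesome_system
WORKDIR /awesome_system
RUN make && make install
```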

Artifacts

We curate software artifacts from papers related to effect handlers. If you wish to contribute your artifacts, then please place your artifact as-is under a suitable directory in artifacts/.

There is no review process for artifacts (other than that they must be related to work on effect handlers). Whilst we do not enforce any standards on artifacts, we do recommend that artifacts conform to the artifact evaluation packaging guidelines used by various programming language conferences.