
Simple Language Benchmarks

Benchmark collection of random problem + language combinations.

It comes with a framework that makes it easy to contribute, run, and visualize new solutions. This website is auto-generated from the results.

Benchmarks page

Disclaimer: This is a fun project, done without putting too much thought into experiment design. Some results are obviously flawed.

Philosophy

About the Framework

The framework makes it possible to quickly run a set of benchmarks and generates output to visualize the results. All HTML on this page is auto-generated.

Each benchmark problem is split into stages, i.e., the solution is computed in several steps, and each step is measured individually (the implementations themselves are responsible for measuring the time of each step). The total runtime is obtained by adding up the runtimes of all stages.

In some cases, splitting a solution into several steps may feel slightly non-idiomatic and inefficient, but it has the benefit of disentangling, for instance, I/O from the computation. Moreover, it sometimes reveals interesting results, such as a language being particularly fast in one step while being slow in another.
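For illustration, here is a minimal sketch of what a staged implementation could look like in Python. The stage names, the timing helper, and the result format are assumptions made for this example; they are not the framework's actual measurement protocol.

```python
import time
from contextlib import contextmanager


@contextmanager
def timed(stage_results, name):
    """Record the wall-clock time of one stage under the given name."""
    start = time.monotonic()
    yield
    stage_results[name] = time.monotonic() - start


def run_benchmark(path):
    stages = {}

    with timed(stages, "io"):       # stage 1: read the input data
        with open(path) as f:
            numbers = [int(line) for line in f]

    with timed(stages, "compute"):  # stage 2: the actual computation
        numbers.sort()

    # The total runtime is simply the sum of the individual stage runtimes.
    stages["total"] = sum(stages.values())
    return stages
```

Because the I/O stage and the compute stage are timed separately, a slow file reader does not hide a fast sorting routine, and vice versa.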

Each benchmark is performed with three different problem sizes: currently, each problem comes in a small, a medium, and a large variant.
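As a purely hypothetical example of what these variants could mean for a single problem (the numbers below are made up and are not the framework's actual settings), a sorting benchmark might simply scale the input length:

```python
# Hypothetical size definitions for a single (sorting) benchmark.
# The actual sizes used by the framework differ per problem.
PROBLEM_SIZES = {
    "small": 100_000,      # number of elements to sort
    "medium": 1_000_000,
    "large": 10_000_000,
}
```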

Run Benchmarks

You can run all benchmarks for yourself or even create your own set of benchmarks. The main framework is written in Python and should be reproducible on any UNIX-based system.

TODO: extend documentation

Contribute

Contributions of any kind are highly welcome: GitHub Repository

In particular, it would be nice to see (more) implementations, e.g., for the following languages: Nim, Rust, Go, Haskell, Clojure, Kotlin, Julia, R, Crystal, Racket, Lua, Ruby, Java...

License

This project is licensed under the terms of the MIT license.