
Programming Language Benchmarks

Yet another implementation of the computer language benchmarks game: https://programming-language-benchmarks.vercel.app/ (MIT License)

Why Build This

The idea is to build an automated process for benchmark generation and publishing.

Comparable numbers

It currently uses CI to generate benchmark results, which guarantees that all the numbers are produced in the same environment at nearly the same time. All benchmark tests are executed in a single CI job.

Automatic publishing

Once a change is merged into the main branch, the CI job re-generates and publishes the static website.
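
Conceptually, the pipeline looks like the following (a minimal sketch assuming GitHub Actions; the workflow and job names here are illustrative, not the repo's actual config, while the commands are the ones documented below):

name: benchmark
on:
  push:
    branches: [main]
jobs:
  bench-and-publish:
    # A single job keeps the numbers comparable: same machine, nearly the same time
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Build, test and bench every language
      - run: |
          cd bench
          dotnet run -p tool -- --task build
          dotnet run -p tool -- --task test
          dotnet run -p tool -- --task bench
      # Re-generate the static website with the fresh numbers
      - run: |
          cd website
          pnpm i && pnpm content && pnpm build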

Main Goals

Website

Build

To achieve better SEO, the published site is static and prerendered, powered by Nuxt.js.

Host

The website is hosted on Vercel

Development

git clone https://github.com/hanabi1224/Programming-Language-Benchmarks.git

cd website
pnpm i
pnpm build
pnpm dev

Benchmarks

All benchmarks are defined in bench.yaml

The current benchmark problems and their implementations are from The Computer Language Benchmarks Game (repo)
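
For a rough picture, an entry in bench.yaml might look like this (a hypothetical sketch of the shape only; docker_cmd is confirmed by the Prerequisites section below, the other keys are illustrative guesses, not the file's actual schema):

docker_cmd: podman      # confirmed: switchable to docker, see Prerequisites
langs:
  - lang: go            # hypothetical: a language and its build setup
problems:
  - name: nbody         # hypothetical: a problem exercised by the test/bench steps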

Local development

Prerequisites

.NET 7 SDK

Node.js 14

pnpm

podman (or docker, by changing docker_cmd: podman to docker_cmd: docker in bench/bench.yaml)

Build

The first step is to build the source code for the various languages

cd bench
# To build a subset
dotnet run -p tool -- --task build --langs lisp go --problems nbody helloworld --force-rebuild
# To build all
dotnet run -p tool -- --task build

Test

The second step is to test the built binaries to ensure the correctness of their implementations

cd bench
# To test a subset
dotnet run -p tool -- --task test --langs lisp go --problems nbody helloworld
# To test all
dotnet run -p tool -- --task test

Bench

The third step is to generate the benchmarks

cd bench
# To bench a subset
dotnet run -p tool -- --task bench --langs lisp go --problems nbody helloworld
# To bench all
dotnet run -p tool -- --task bench

For usage information:

cd bench
dotnet run -p tool -- -h

BenchTool
  Main function

Usage:
  BenchTool [options]

Options:
  --config <config>              Path to benchmark config file [default: bench.yaml]
  --algorithm <algorithm>        Root path that contains all algorithm code [default: algorithm]
  --include <include>            Root path that contains all include project templates [default: include]
  --build-output <build-output>  Output folder of build step [default: build]
  --task <task>                  Benchmark task to run, valid values: build, test, bench [default: build]
  --force-pull-docker            A flag that indicates whether to force pull the docker image even when it exists [default: False]
  --force-rebuild                A flag that indicates whether to force rebuild [default: False]
  --fail-fast                    A flag that indicates whether to fail fast when an error occurs [default: False]
  --build-pool                   A flag that indicates whether builds can run in parallel [default: False]
  --verbose                      A flag that indicates whether to print verbose information [default: False]
  --no-docker                    A flag that forces disabling docker [default: False]
  --langs <langs>                Languages to include, e.g. --langs go csharp [default: ]
  --problems <problems>          Problems to include, e.g. --problems binarytrees nbody [default: ]
  --environments <environments>  OS environments to include, e.g. --environments linux windows [default: ]
  --version                      Show version information
  -?, -h, --help                 Show help and usage information

Refresh website

Lastly, you can re-generate the website with the latest benchmark numbers:

cd website
pnpm i
pnpm content
pnpm build
serve dist
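
Here, serve is assumed to be the npm serve package for static files; if it is not already installed:

npm i -g serve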

TODOs

Integrate test environment info into the website

Integrate build / test / benchmark information into the website

...

How to contribute

TODO

Thanks

This is inspired by The Computer Language Benchmarks Game, thanks to the curator.

LICENSES

Code for the problem implementations from The Computer Language Benchmarks Game is under their Revised BSD license.

Other code in this repo is under MIT.