Microbenchmarking tool for Elixir.
With Benchfella you can define small tests and it will intelligently run each individual one to obtain a more or less reliable estimate of the average run time for each test.
Benchfella's key features are described below.
If you are looking for a more elaborate treatment of the measurements, take a look at bmark which employs mathematical statistics to compare benchmarking results and determine their credibility.
Add `:benchfella` as a dependency to your project:
```elixir
# in your mix.exs
defp deps do
  [
    {:benchfella, "~> 0.3.0"}
  ]
end
```
This will make the new tasks available in the root directory of your Mix project. Any project will do, so if you just want to measure a snippet of code quickly, create a bare-bones Mix project with `mix new`, create a subdirectory called `bench` in it, and put your benchmark definitions there. See examples below.
Take a moment to study the output of running `mix help bench` and `mix help bench.cmp` inside your Mix project to see all supported options.
In order to start writing tests, add a directory called `bench` and put files with names that match the pattern `*_bench.exs` in it. Then run `mix bench` in the root directory of your project. Benchfella will then load each test and execute it for as many iterations as necessary so that the total running time is at least the specified duration.
Example:
```elixir
# bench/basic_bench.exs
defmodule BasicBench do
  use Benchfella

  @list Enum.to_list(1..1000)

  bench "hello list" do
    Enum.reverse @list
  end
end
```
```
$ mix bench
Settings:
  duration:      1.0 s

## BasicBench
[13:23:58] 0/1: hello list

Finished in 3.15 seconds

## BasicBench
hello list        500000   5.14 µs/op
```
In the end, the number of iterations and the average time of a single iteration are printed to the standard output. Additionally, the output in machine format is written to a snapshot file in `bench/snapshots/`.
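These snapshots are what `mix bench.cmp` consumes. A minimal session might look like the following sketch; it assumes the default behavior of `bench.cmp` is to compare the two most recent snapshots in `bench/snapshots/` (run `mix help bench.cmp` to confirm the options available in your version):

```shell
# run the suite twice to produce two snapshot files
$ mix bench
$ mix bench

# compare the two most recent snapshots in bench/snapshots/
$ mix bench.cmp
```

This makes it easy to benchmark before and after a code change and see the relative difference per test.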
### `setup_all` and `teardown_all`
`setup_all/0` lets you run some code before the first test in a module is executed. It takes no arguments and should return `{:ok, <context>}` where `<context>` is any term; that term will be passed into `before_each_bench/1` and `teardown_all/1` if they are defined. Returning any other value will raise an error and cause the whole module to be skipped.
`teardown_all/1` lets you do some cleanup after the last test in a module has finished running. It takes the context returned from `setup_all/0` (`nil` by default) as its argument.
```elixir
# bench/sys_bench.exs
defmodule SysBench do
  use Benchfella

  setup_all do
    depth = :erlang.system_flag(:backtrace_depth, 100)
    {:ok, depth}
  end

  teardown_all depth do
    :erlang.system_flag(:backtrace_depth, depth)
  end

  @list Enum.to_list(1..10000)

  bench "list reverse" do
    Enum.reverse(@list)
  end
end
```
### `before_each_bench` and `after_each_bench`
`before_each_bench/1` runs before each individual test is executed. It takes the context returned from `setup_all/0` and should return `{:ok,