This repo is set up to test the performance of various test runners.

First, install `hyperfine` via these instructions, then install dependencies:

```sh
yarn
```

Then you can run benchmarks via:
```sh
hyperfine --warmup 1 \
  'yarn workspace bun test' \
  'yarn workspace jasmine test' \
  'yarn workspace jest test' \
  'yarn workspace tape test' \
  'yarn workspace vitest test --poolOptions.threads.isolate=false'
```
> [!NOTE]
> These benchmarks are supported on macOS and Linux. Windows is not supported at this time.
The repo contains the following suites:

`jasmine`
: This is our baseline, using Jasmine and happy-dom.

`bun`
: The same test suite, but run using Bun.

`jest`
: The same test suite, but run using Jest.

`tape`
: The same test suite, but run using Tape and ts-node.

`vitest`
: The same test suite, but run using Vitest. NOTE: the benchmarks use `--poolOptions.threads.isolate=false`, as it has the best performance (see this comment).

Benchmarks are run via GitHub Actions. You can check the latest run results here.
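As an aside, the isolation setting used for the `vitest` suite above can also be set in a config file instead of on the command line. A minimal sketch, assuming Vitest's `defineConfig` helper from `vitest/config`:

```typescript
// vitest.config.ts — sketch only; mirrors the
// --poolOptions.threads.isolate=false CLI flag used in the benchmarks.
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    poolOptions: {
      threads: {
        // Skip per-file environment isolation to speed up runs.
        isolate: false,
      },
    },
  },
})
```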
Notes on methodology:

- `hyperfine` is used for consistent and reproducible benchmark collection.
- Benchmarks include a warmup run (`--warmup 1`) to let various caches build up.
- Each suite uses its runner's own spy API (Jasmine has `createSpy()`, whereas Jest has `jest.fn()` and Vitest has `vi.fn()`).
Previously removed suites:

`jest-dot`
: It was suggested that using Jest's dot reporter might result in faster performance. In the past this benchmark repo had a `jest-dot` suite to validate this, but after many runs it had nearly no impact on performance. The suite has since been removed.

`jest-goloveychuk`
: GitHub user @goloveychuk suggested a solution which reduces Jest's memory usage. This solution was added and tested, but the performance impact was no different.

`fastest-jest-runner`
: Same as `jest`, but using fastest-jest-runner. This solution was tested for several months, but its performance in this benchmark was far worse than any of the others (including the baseline `jest`). It was removed on 2023-02-25.

`jest-swc`
: Same as `jest`, but using `@swc/jest` instead of `ts-jest`. It showed virtually no impact on performance. It was removed on 2023-05-22.