`metatests` is an extremely simple-to-use test framework and runner for the Metarhia technology stack, built on the following principles:

- Test cases are files; tests are either imperative (functions) or declarative
  (arrays and structures).
- Assertions are done using the built-in Node.js `assert` module. The framework
  also provides additional testing facilities (like spies).
- Tests can be run in parallel.
- All tests are executed in isolated sandboxes. The framework makes it easy to
  mock modules required by tests and provides ready-to-use mocks for timers and
  other core functionality.
- Testing asynchronous operations must be supported.
- Testing pure functions without asynchronous operations and state can be done
  without extra boilerplate code, using a DSL based on arrays:

  ```js
  mt.case(
    'Test common.duration',
    { common },
    {
      // ...
      'common.duration': [
        ['1d', 86400000],
        ['10h', 36000000],
        ['7m', 420000],
        ['13s', 13000],
        ['2d 43s', 172843000],
        // ...
      ],
      // ...
    },
  );
  ```

- The framework must work in Node.js and browsers (using Webpack or any other
  module bundler that supports CommonJS modules and emulates Node.js globals).
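
For the imperative style mentioned above, a minimal test might look like the sketch below (assuming the package is required as `metatests`; the `test()`, `strictSame()`, and `end()` calls are described later in this document):

```js
const metatests = require('metatests');

metatests.test('sum of two numbers', (test) => {
  const sum = (a, b) => a + b;
  test.strictSame(sum(2, 3), 5);
  test.end(); // finish the test explicitly
});
```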

- `caption`: `<string>` case caption
- `namespace`: `<Object>` namespace to use in this case test
- `list`: `<Object>` hash of `<Array>`, hash keys are function and method
  names. Each `<Array>` contains call parameters; its last item is an expected
  result (to compare) or a `<Function>` (the result is passed to it to compare)
- `runner`: `<Runner>` runner for this case test, optional, default:
  `metatests.runner.instance`

Create declarative test.
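
As a sketch of the `<Function>` form of the last array item (the `math` namespace and its values are made up for illustration; the entry point is assumed to be `metatests.case`, matching the `mt.case` call shown earlier):

```js
const metatests = require('metatests');

// Hypothetical namespace under test.
const math = { square: (x) => x * x };

metatests.case(
  'Test math.square',
  { math },
  {
    'math.square': [
      [2, 4], // last item is the expected result
      [3, 9],
      [4, (result) => result === 16], // last item receives the result to compare
    ],
  },
);
```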

- `options`: `<Object>`
  - `stream`: `<stream.Writable>` optional

- `test`: `<Test>`
- `error`: `<Error>`

Fail test with error.

- `test`: `<Test>`

Record test.

- `caption`: `<string>` name of the benchmark
- `count`: `<number>` amount of times to run each function
- `cases`: `<Array>` functions to check

Microbenchmark each passed function and compare results.

- `cases`: `<Array>` cases to test, each case contains
  - `fn`: `<Function>` function to check, will be called with each set of args
    provided
  - `name`: `<string>` case name, `function.name` by default
  - `argCases`: `<Array>` array of arguments to create runs with. When omitted,
    `fn` will be run once without arguments. Total amount of runs will be
    `runs * argCases.length`.
  - `n`: `<number>` number of times to run the test, `defaultCount` from
    options by default
- `options`: `<Object>`
  - `defaultCount`: `<number>` number of times to run the function by default,
    default: 1e6
  - `runs`: `<number>` number of times to run the case, default: 20
  - `preflight`: `<number>` number of times to pre-run the case for each set of
    arguments, default: 10
  - `preflightCount`: `<number>` number of times to run the function in the
    preflight stage, default: 1e4
  - `listener`: `<Object>` the appropriate function will be called to report
    events, optional
    - `preflight`: `<Function>` called when preflight is starting, optional
    - `run`: `<Function>` called when run is starting, optional
    - `cycle`: `<Function>` called when run is done, optional
    - `done`: `<Function>` called when all runs for given configurations are
      done, optional
    - `finish`: `<Function>` called when measuring is finished, optional
      - `results`: `<Array>` all case results

Returns: `<Array>` results of all cases as objects of structure

- `name`: `<string>` case name
- `args`: `<Array>` arguments for this run
- `count`: `<number>` number of times the case was run
- `time`: `<number>` time in nanoseconds it took to make `count` runs
- `result`: `<any>` result of one of the runs

Microbenchmark each passed configuration multiple times.

- `results`: `<Array>` all results from `measure` run

Returns: `<string>` valid CSV representation of the results

Convert `metatests.measure` result to CSV.
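
A sketch of how the two facilities above could be combined; the CSV converter is assumed to be exposed as `metatests.convertToCsv` (that name is not spelled out in the text above), and the benchmarked functions are made up:

```js
const metatests = require('metatests');

// Compare two ways of building a string.
const results = metatests.measure(
  [
    { name: 'concat', fn: (n) => 'x' + n, argCases: [[1], [2]] },
    { name: 'template', fn: (n) => `x${n}`, argCases: [[1], [2]] },
  ],
  {
    defaultCount: 1e5, // run each function 1e5 times per run
    runs: 5, // repeat every configuration 5 times
    listener: {
      finish: () => console.log('measuring finished'),
    },
  },
);

// `convertToCsv` is an assumed export name for the converter described above.
console.log(metatests.convertToCsv(results));
```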

- `func`: `<Function>`
  - `subtest`: `<ImperativeTest>` test instance
  - `callback`: `<Function>`
  - Returns: `<Promise>` | `<void>`

Set a function to run after each subtest.
The function must either return a promise or call a callback.

- `value`: `<any>` value to check
- `message`: `<string>` description of the check, optional

Check if value is truthy.

- `value`: `<any>` value to check
- `message`: `<string>` description of the check, optional

Check if value is falsy.

Fail this test and throw an error.
If both `err` and `message` are provided, `err.toString()` will be appended to
`message`.

- `func`: `<Function>`
  - `subtest`: `<ImperativeTest>` test instance
  - `callback`: `<Function>`
    - `context`: `<any>` context of the test. It will be passed as the second
      argument to the test function and is available at `test.context`
  - Returns: `<Promise>` | `<void>` nothing or a Promise resolved with context

Set a function to run before each subtest.
The function must either return a promise or call a callback.
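
A sketch tying the two hooks above together (the method names `beforeEach`, `afterEach`, and `endAfterSubtests` are assumed here; the text above only describes the behaviour, and the resource being opened is made up):

```js
const metatests = require('metatests');

// Hypothetical resource shared by subtests.
const openDb = async () => ({ close: async () => {} });

metatests.test('subtests with a shared resource', (test) => {
  // Runs before each subtest; the resolved value becomes the subtest context.
  test.beforeEach(async () => ({ db: await openDb() }));

  // Runs after each subtest; receives the subtest instance.
  test.afterEach(async (subtest) => {
    await subtest.context.db.close();
  });

  test.test('context is available', (t) => {
    t.strictSame(typeof t.context.db, 'object');
    t.end();
  });

  test.endAfterSubtests();
});
```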

Create a declarative `case()` subtest of this test.

- `msg`: `<string>` `test.error` message
- `cb`: `<Function>` callback function

Returns: `<Function>` function to pass to callback

Create error-first callback wrapper to perform automatic checks.
This will `test.mustCall()` the callback and `test.error()` the first callback
argument.

- `fail`: `<string>` `test.fail` message
- `cb`: `<Function>` callback function to call if there was no error
- `afterAllCb`: `<Function>` function called after callback handling

Returns: `<Function>` function to pass to callback

Create error-first callback wrapper to fail the test if the call fails.
This will `test.mustCall()` the callback and, if the call errored, will use
`test.fail()` and `test.end()`.

- `actual`: `<any>` actual data
- `subObj`: `<any>` expected properties
- `message`: `<string>` description of the check, optional
- `sort`: `<boolean>` | `<Function>` if true or a sort function, sort data
  properties, default: false
- `cmp`: `<Function>` test function, default: `compare.strictEqual`
  - `actual`: `<any>`
  - `expected`: `<any>`
  - Returns: `<boolean>` true if actual is equal to expected, false otherwise

Check that actual contains all properties of subObj.
Properties will be compared with the test function.
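
For example, checking only a subset of an object's properties could look like this sketch (only calls named elsewhere in this document are used; the data is made up):

```js
const metatests = require('metatests');

metatests.test('response has the expected fields', (test) => {
  const response = { id: 42, name: 'marcus', role: 'admin' };
  // Only the listed properties are compared; extra ones are ignored.
  test.contains(response, { id: 42, role: 'admin' });
  test.end();
});
```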

- `actual`: `<any>` actual data
- `subObj`: `<any>` expected properties
- `message`: `<string>` description of the check, optional
- `cmp`: `<Function>` test function, default: `compare.strictEqual`
  - `actual`: `<any>`
  - `expected`: `<any>`
  - Returns: `<boolean>` true if actual is equal to expected, false otherwise

Check greedily that actual contains all properties of subObj.
Similar to `test.contains()` but will succeed if at least one of the properties
in actual matches the one in subObj.

- `fn`: `<Function>` function to call before the end of test. Can return a
  promise that will defer the end of test.
- `options`: `<Object>`
  - `ignoreErrors`: `<boolean>` ignore errors from the `fn` function,
    default: false

Defer a function call until just before the end of the test.

- `fn`: `<Function>` function to run
- `message`: `<string>` description of the check, optional

Check that fn doesn't throw.

Finish the test.
This will fail if the test has unfinished subtests or its plan is not complete.

Mark this test to call end after its subtests are done.

- `actual`: `<any>` actual data
- `expected`: `<any>` expected data
- `message`: `<string>` description of the check, optional

Compare actual and expected for non-strict equality.

- `err`: `<any>` error to check
- `message`: `<string>` description of the check, optional

Fail if err is an instance of Error.

- `message`: `<string>` | `<Error>` failure message or error, optional
- `err`: `<Error>` error, optional

Fail this test, recording the failure message.
This doesn't call `test.end()`.

- `checkFn`: `<Function>` condition function
  - `val`: `<any>` provided value
  - Returns: `<boolean>` true if the condition is satisfied and false otherwise
- `val`: `<any>` value to check the condition against
- `message`: `<string>` check message, optional

Check whether `val` satisfies the custom `checkFn` condition.

- `val`: `<any>` value to check
- `message`: `<string>` check message, optional

Check if `val` satisfies `Array.isArray`.

- `val`: `<any>` value to check
- `message`: `<string>` check message, optional

Check if `val` satisfies `Buffer.isBuffer`.

- `actual`: `<any>` actual error to compare
- `expected`: `<any>` expected error, default: `new Error()`
- `message`: `<string>` description of the check, optional

Check if actual is equal to the expected error.

- `input`: `<Promise>` | `<Function>` promise or a function returning a thenable
- `err`: `<any>` value to be checked with `test.isError()` against the rejected
  value

Check that input rejects.

- `input`: `<Promise>` | `<Function>` promise or a function returning a thenable
- `expected`: `<any>` if passed, it will be checked with `test.strictSame()`
  against the resolved value

Verify that input resolves.

- `fn`: `<Function>` function to be checked, default: `() => {}`
- `count`: `<number>` amount of times fn must be called, default: 1
- `name`: `<string>` name of the function, default: 'anonymous'

Returns: `<Function>` function to check with; it will forward all arguments to
fn and return fn's result

Check that fn is called the specified amount of times.
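
For example (a sketch using only calls named in this document; the emitter is made up):

```js
const metatests = require('metatests');

metatests.test('handler is called exactly twice', (test) => {
  const emitTwice = (handler) => {
    handler('first');
    handler('second');
  };
  // The test fails on end if the wrapped handler was not called twice.
  const handler = test.mustCall((event) => event, 2, 'handler');
  emitTwice(handler);
  test.end();
});
```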

- `fn`: `<Function>` function that must not be called, default: `() => {}`
- `name`: `<string>` name of the function, default: 'anonymous'

Returns: `<Function>` function to check with; it will forward all arguments to
fn and return fn's result

Check that fn is not called.

- `actual`: `<any>` actual data
- `expected`: `<any>` expected data
- `message`: `<string>` description of the check, optional

Compare actual and expected for non-strict inequality.

- `value`: `<any>` value to check
- `message`: `<string>` description of the check, optional

Check if value is falsy.

- `obj1`: `<any>` actual data
- `obj2`: `<any>` expected data
- `message`: `<string>` description of the check, optional

Compare actual and expected to not have the same topology.

- `value`: `<any>` value to check
- `message`: `<string>` description of the check, optional

Check if value is truthy.

- `message`: `<string>` message to record

Record a passing assertion.

- `n`: `<number>` amount of assertions

Plan this test to have exactly n assertions and end the test after this amount
of assertions is reached.
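
A sketch of planned asynchronous checks (the planning method is assumed to be named `test.plan`; the name is not given in the text above):

```js
const metatests = require('metatests');

metatests.test('planned asynchronous checks', (test) => {
  test.plan(2); // the test ends automatically after two assertions
  setTimeout(() => test.strictSame(1 + 1, 2), 10);
  setTimeout(() => test.strictSame(2 * 2, 4), 20);
});
```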
Test whether input matches the provided RegExp.

- `input`: `<Promise>` | `<Function>` promise or a function returning a thenable
- `err`: `<any>` value to be checked with `test.isError()` against the rejected
  value

Check that input rejects.

- `input`: `<Promise>` | `<Function>` promise or a function returning a thenable
- `expected`: `<any>` if passed, it will be checked with `test.strictSame()`
  against the resolved value

Verify that input resolves.
Start running the test.

- `actual`: `<any>` actual data
- `expected`: `<any>` expected data
- `message`: `<string>` description of the check, optional

Compare actual and expected for non-strict equality.

- `obj1`: `<any>` actual data
- `obj2`: `<any>` expected data
- `message`: `<string>` description of the check, optional

Compare actual and expected to have the same topology.
Useful for comparing objects with circular references for equality.
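
For example, two independently built circular structures with the same shape (the method name `sameTopology` is assumed here, following the description above):

```js
const metatests = require('metatests');

metatests.test('circular structures with the same shape', (test) => {
  const a = { name: 'node' };
  a.self = a; // circular reference
  const b = { name: 'node' };
  b.self = b; // same shape, different object identity
  test.sameTopology(a, b);
  test.end();
});
```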

- `actual`: `<any>` actual data
- `expected`: `<any>` expected data
- `message`: `<string>` description of the check, optional

Compare actual and expected for strict equality.

- `actual`: `<any>` actual data
- `expected`: `<any>` expected data
- `message`: `<string>` description of the check, optional

Compare actual and expected for strict inequality.

- `actual`: `<any>` actual data
- `expected`: `<any>` expected data
- `message`: `<string>` description of the check, optional

Compare actual and expected for strict equality.

- `caption`: `<string>` name of the test
- `func`: `<Function>` test function
  - `test`: `<ImperativeTest>` test instance
- `options`: `<TestOptions>`
  - `run`: `<boolean>` auto start test, default: true
  - `async`: `<boolean>` if true, do nothing; if false, auto-end the test on
    nextTick after `func` runs, default: true
  - `timeout`: `<number>` time in milliseconds after which the test is
    considered timed out
  - `parallelSubtests`: `<boolean>` if true, subtests will be run in parallel,
    otherwise subtests are run sequentially, default: false
  - `dependentSubtests`: `<boolean>` if true, each subtest will be executed
    sequentially in order of addition to the parent test, short-circuiting if
    any subtest fails, default: false

Returns: `<ImperativeTest>` subtest instance

Create a subtest of this test.
If the subtest fails, this test will fail as well.
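
A sketch of nested subtests using the options above (the `endAfterSubtests` method name is assumed; it is only described, not named, earlier in this document):

```js
const metatests = require('metatests');

metatests.test(
  'parent test',
  (test) => {
    test.test('child one', (t) => {
      t.strictSame(typeof 'str', 'string');
      t.end();
    });

    test.test(
      'child two',
      (t) => {
        t.strictSame([1, 2].length, 2);
        t.end();
      },
      { timeout: 1000 }, // consider this child timed out after one second
    );

    test.endAfterSubtests(); // end the parent once both children are done
  },
  { parallelSubtests: true }, // run the children in parallel
);
```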

Create an asynchronous subtest of this test.
Simple wrapper for `test.test()` setting the `async` option to true.

Create a synchronous subtest of this test.
Simple wrapper for `test.test()` setting the `async` option to false.

- `fn`: `<Function>` function to run
- `expected`: `<any>` expected error, default: `new Error()`
- `message`: `<string>` description of the check, optional

Check that fn throws the expected error.

- `obj`: `<any>` value to check
- `type`: `<string>` | `<Function>` class or class name to check
- `message`: `<string>` description of the check, optional

Check if obj is of the specified type.

- `caption`: `<string>` name of the test
- `func`: `<Function>` test function
  - `test`: `<ImperativeTest>` test instance
- `options`: `<TestOptions>`
  - `run`: `<boolean>` auto start test, default: true
  - `async`: `<boolean>` if true, do nothing; if false, auto-end the test on
    nextTick after `func` runs, default: true
  - `timeout`: `<number>` time in milliseconds after which the test is
    considered timed out
  - `parallelSubtests`: `<boolean>` if true, subtests will be run in parallel,
    otherwise subtests are run sequentially, default: false
  - `dependentSubtests`: `<boolean>` if true, each subtest will be executed
    sequentially in order of addition to the parent test, short-circuiting if
    any subtest fails, default: false
- `runner`: `<Runner>` runner instance to use to run this test

Returns: `<ImperativeTest>` test instance

Create a test case.
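
For example, a test that relies on auto-ending instead of an explicit `test.end()` (a sketch using only the options listed above):

```js
const metatests = require('metatests');

metatests.test(
  'arithmetic works',
  (test) => {
    test.strictSame(2 * 21, 42);
    // No explicit test.end(): with `async: false` the test auto-ends
    // on the next tick after the test function returns.
  },
  { async: false, timeout: 2000 },
);
```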

Create a synchronous test.
Simple wrapper for `test()` setting the `async` option to false.

Create an asynchronous test.
Simple wrapper for `test()` setting the `async` option to true.