Hello :wave:
:rocket::rocket::rocket:
ava just published its new version 0.15.0, which is not covered by your current version range.
If this pull request passes your tests you can publish your software with the latest version of ava; otherwise, use this branch to work on adaptations and fixes.
Happy fixing and merging :palm_tree:
GitHub Release
In this release we've added some very useful features we think you'll like. There are no known breaking changes. We have worked hard on fixing bugs and improving performance. Going forward stability and performance will be our top priority as we're progressing towards a stable release. Finally, we're delighted to welcome @sotojuan to the team!
Test file conventions
When you run AVA without any arguments, it tries to find your test files based on some conventions. The previous release had the following default patterns:

test.js test-*.js test/**/*.js

In this release we added additional default patterns based on community conventions. AVA will now also run test files in __tests__ directories and test files ending in .test.js anywhere in your project. 9ceeb11
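For instance, a test file placed right next to your source is now picked up without any configuration, simply because its name ends in .test.js. The file path and test body below are only an illustration:

// src/math.test.js: discovered automatically by the new *.test.js convention
import test from 'ava';

test('addition works', t => {
    t.is(1 + 2, 3);
});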
Known failing tests
A big part of open source maintenance is triaging issues. This can be a tedious task, involving a lot of back and forth with the person reporting the issue. Submitting a PR with a failing test makes the whole process much more efficient. It helps maintainers by providing a quality reproduction, allowing them to focus on code over triaging. Users benefit as their bugs get fixed faster.
To make it easier to submit failing tests, we're introducing a new test modifier: test.failing(). These tests are run just like normal ones, but they are expected to fail, and will not break your build when they do. If a test marked as failing actually passes, the build will break with a helpful message instructing you to remove the .failing modifier.

test.failing('demonstrate some bug', t => {
    t.fail(); // test will count as passed
});

This allows you to merge .failing tests before a fix is implemented without breaking CI. It is also a great way to recognize good bug reports with a commit credit, even if the reporter is unable to fix the problem. 0410a03
Test macros
Sometimes you want to run a series of very similar tests, each with different inputs and expected results. The traditional solution is to use test generator functions. However, this makes it difficult to perform static analysis on the tests, which is especially important for linting. In this release, we are introducing test macros as the official way to create reusable tests.
Test macros let you reuse test implementations. Additional arguments passed in the test declaration are forwarded to the macro:
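Here is a minimal sketch of such a macro; the macro name, assertion, and inputs are illustrative rather than taken from the release notes:

import test from 'ava';

// The macro receives the usual `t`, followed by whatever extra arguments
// the test declaration passes after the macro itself.
function trimsTo(t, input, expected) {
    t.is(input.trim(), expected);
}

test('leading whitespace', trimsTo, '  foo', 'foo');
test('trailing whitespace', trimsTo, 'bar  ', 'bar');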
If you are generating lots of tests from a single macro, you may want to generate the test title programmatically:
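One way to do that, sketched below using the illustrative trimsTo macro from above, is to attach a title function to the macro; when a test omits its title, AVA builds one from the forwarded arguments:

// When no explicit title is given, AVA derives one from the macro's title function.
trimsTo.title = (providedTitle, input, expected) =>
    providedTitle || `trims ${JSON.stringify(input)} to ${JSON.stringify(expected)}`;

test(trimsTo, '  foo', 'foo'); // title: trims "  foo" to "foo"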
a454128
Limited concurrency [EXPERIMENTAL]
Concurrency is awesome! It's what makes AVA so fast. Our default behavior is to spin up an isolated process for every test file immediately, and then run all your tests. However, for users with lots of test files, this was eating up too many system resources, causing memory and IO thrashing. We are experimenting with an option to let you limit how many test files AVA runs at the same time. If you have a lot of test files, try running AVA with the --concurrency flag. For example, run $ ava --concurrency=5 and see if the performance improves. Please let us know how it works for you! We need feedback on the feature.

Note: This is an experimental feature and might change or be removed in the future.
1c3c86c
Highlights
after.always and afterEach.always hooks (sketched briefly after this list). 61f0958
t.deepEqual(). 973624d
t.throws. 3201b1b
t.throws. 60bd8a5
power-assert output. 84c05fe
npm link usage on Node.js 6. a543b9f
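The always hooks mentioned above run regardless of whether the tests themselves pass or fail, which makes them a good place for cleanup. A rough illustration, with hypothetical cleanup bodies:

import test from 'ava';

// Runs after every test, even tests that failed.
test.afterEach.always(t => {
    // e.g. remove temporary files created by the test
});

// Runs once after all tests, even if some of them failed.
test.after.always(t => {
    // e.g. close a shared database connection
});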
All Changes
v0.14.0...v0.15.0
The new version differs by 92 commits.
c9c00d6 0.15.0
7486dcb add test.failing and add cb to test.serial (#866)
77b55e5 Move images to media folder.
463dbd4 bump update-notifier. (#869)
bc11e08 Fix AppVeyor Badge (leave it pointing to Sindre's account).
6f83264 document always hook with fail fast flag (#858)
1d51231 fix warning message
a499523 Drop unused silent option from lib/fork.js. (#864)
a543b9f revert npm link fix. (#859)
0ef6dc4 Fix "Test files must be run with the AVA CLI" warning. (#862)
e01ee00 Pass shared caches to globby. (#860)
3763109 Avoid running the same file multiple times (#856)
178175f add output for failing tests (#837)
ea9c752 added missing todo definition for typescript (#846)
f35da7e RunStatus#prefixTitle should remove this.base only if it occurs at the beginning of the path (#844)
There are 92 commits in total. See the full diff.
This pull request was created by greenkeeper.io. It keeps your software up to date, all the time.
Tired of seeing this sponsor message? Upgrade to the supporter plan! You'll also get your pull requests faster :zap: