megoth opened this issue 5 years ago
@megoth In my experience that can be a sign of not cleaning up fully after each test, which often comes from unnecessarily reusing setup across tests that don't need it, usually just out of convenience or because it's the quickest to type.
For example, the test you mention is in a suite with a few other `it()`s, and that suite uses `before`/`after` instead of `beforeEach`/`afterEach`. The latter is usually a better choice to guarantee test isolation unless you have a really good reason to share state across `it()`s.
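As a minimal sketch of that isolated setup (the startFreshServer helper below is hypothetical, not the suite's actual function), each `it()` gets its own fixture and tears it down afterwards:

```js
describe('test directory access', () => {
  let server

  beforeEach(async () => {
    // brand-new instance for every single test
    server = await startFreshServer() // hypothetical setup helper
  })

  afterEach(async () => {
    // always torn down, so no state survives into the next it()
    await server.stop()
    server = null
  })

  it('responds to HEAD', async () => {
    // assertions against this test's own server instance
  })
})
```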
Having wrestled with complected tests for far too long, I've taken to, as much as possible, just never even using setup/tearDown or any type of shared state between tests. At least ask "is it really needed?". Often it's only there to 'be a little faster' (e.g. only create an object once per suite), but any savings of milliseconds there is usually paid for by hours of debugging time later, or by sacrificed quality in the end product from not trusting the test suite.
So, e.g., in the distbin tests, almost every test creates a whole new instance of everything. Distbin also does I/O to the filesystem, but it creates a new `os.tmpdir()` each time, so you don't need to remember to `rm` anything that might come out of a test (examples).
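A minimal sketch of that per-test scratch directory using only Node's built-in modules (this is not distbin's actual helper):

```js
const fs = require('fs')
const os = require('os')
const path = require('path')

// Returns a brand-new directory under the OS temp dir. mkdtemp appends random
// characters, so parallel tests never collide and there is nothing to rm later.
async function createTestDir () {
  return fs.promises.mkdtemp(path.join(os.tmpdir(), 'solid-test-'))
}
```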
Better yet, unit test your filesystem I/O in one place (see the last example), and everywhere else, when testing the business logic itself, use an in-memory implementation of your storage interface that is garbage collected, so you don't even depend on os/fs details when testing solid logic in a more general sense. In distbin I just use the `Map` interface, and have a `JSONFileMap` implementation that reads/writes objects as JSON files in a directory on the filesystem. Then I realized `AsyncMap` is probably more general for storage that is a network hop away, so I made that too. Some may find that useful.
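As a rough sketch of that idea (the names below are illustrative, not distbin's actual exports): the business logic only sees the Map interface, so a unit test can hand it a plain in-memory Map while production hands it a filesystem-backed implementation, and the async flavour is the same pattern with promises.

```js
// Business logic written against the Map interface only.
function createActivityStore (map) {
  return {
    save (id, activity) { map.set(id, JSON.stringify(activity)) },
    load (id) {
      const json = map.get(id)
      return json ? JSON.parse(json) : undefined
    }
  }
}

// In a unit test: no filesystem, no tmp dir, nothing to clean up.
const store = createActivityStore(new Map())
store.save('1', { type: 'Note' })
console.assert(store.load('1').type === 'Note')

// The async flavour: an in-memory stand-in for storage that is a network hop
// away (a sketch, not distbin's actual AsyncMap).
class InMemoryAsyncStore {
  constructor () { this.map = new Map() }
  async get (key) { return this.map.get(key) }
  async set (key, value) { this.map.set(key, value) }
}
```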
To be honest, it's probably impossible to stay exactly aligned with the `Map` interface as you want more things like globbing, permissions, etc., but the general idea of having non-filesystem-backed implementations of whatever your business logic does I/O against is the real pattern I've found useful. Even without doing that, though, solid may benefit from at least creating a new `os.tmpdir()` and even a new instance of `SolidServer` (or equivalent) for each `it()`, and not having shared state in a test suite without a very good (and commented) reason.
Finally, it can be helpful for developers to have this pattern encapsulated in something they can opaquely use in each `it()` test without worrying about the specifics of setting up a tmpDir.
it('user1 should be able to access test directory',
  withIsolatedServer(function ({ serverUri, ldp, ldpHttpsServer }, done) {
    var options = createOptions(serverUri, '/origin/test-folder/', 'user1')
    options.headers.origin = origin1
    request.head(options, function (error, response, body) {
      assert.equal(error, null)
      assert.equal(response.statusCode, 200)
      done()
    })
  }))
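One possible shape for such a helper, purely as a sketch (startSolidServer and the fields it returns are assumptions here, not the project's actual API): it creates a fresh temp dir and server for the test, and guarantees teardown when the test finishes.

```js
const fs = require('fs')
const os = require('os')
const path = require('path')

function withIsolatedServer (testBody) {
  return function (done) {
    fs.promises.mkdtemp(path.join(os.tmpdir(), 'solid-test-'))
      // startSolidServer is a hypothetical factory for a fresh server instance
      .then(root => startSolidServer({ root }))
      .then(server => {
        // wrap done so the server is always stopped, whether the test passes or fails
        const finish = err => server.stop().then(() => done(err), done)
        testBody({ serverUri: server.uri, ldp: server.ldp, ldpHttpsServer: server.httpsServer }, finish)
      })
      .catch(done)
  }
}
```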
This is also an example of using more pure functions and less module-level mutable state (e.g. `ldp`, `ldpServer`, `timAccountUri`, etc.). This has the benefit, too, of allowing the tests to be run in parallel (with things like `serverUri` varying and being held in closures) instead of requiring them to be run serially (because they all depend on the module-level variable `serverUri` or `ldpServer`).
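Illustration only: when each test builds its state inside its own closure, nothing stops the tests from running concurrently.

```js
async function testA () {
  const serverUri = 'https://localhost:7001' // local to this test (hypothetical value)
  // ...assertions against this serverUri...
}

async function testB () {
  const serverUri = 'https://localhost:7002' // a different instance, no shared variable
  // ...assertions against this serverUri...
}

// Both can run at the same time because neither mutates module-level state.
Promise.all([testA(), testB()]).then(() => console.log('ok'))
```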
Hope these links and tactics can be useful to consider when writing more tests or improving reliability of existing ones.
/cc'ing @dmitrizagidulin on this since so far I just at-mentioned @megoth, who hasn't been in this file very long. @kjetilk and @rubensworks have also committed in there recently. Just food for thought.
@gobengo Thanks for this! I can use it in my own projects as well :-)
What helps me think about improving tests is trying other test runners (on small-scale projects) and seeing what they do differently. My latest spin would be ava.
I agree with you that using pure functions + isolating side effects (I/O, Network) in a single place eases so many things. I learned some of those patterns by watching https://youtube.com/devmastery
But enough of advertising here :-)
@Ryuno-Ki Glad it was a bit interesting.
wrt test runners: surprise, surprise, I have an opinion on those too. In my opinion 'tests' should just be 'scripts' that have an exit code indicating success or failure (and, of course, stderr to see what failed). This is inspired by the Testing Guide for node.js itself.
In this view, "test runners" and "test frameworks" that magically insert globals like `describe` and `it` are unnecessary complexity.
Ideally, too, test files should not have weird side effects just because they are `require()`d (or `import`ed) by another script. This happens with the jasmine/mocha defaults, where requiring a test file outside the runner throws `ReferenceError`s because those globals aren't defined. Test files should not be some special class of citizen, just more `.js` files like your source code.
This can be implemented by not calling functions at the module-scope level. Instead:
const assert = require('assert');

const test = async () => {
  // whatever you want to assert
};
module.exports = { test };

if (require.main === module) {
  // fail the process with a non-zero exit code if the test rejects
  test().catch(error => { console.error(error); process.exit(1); });
}
With this you can treat this test file as both something that exports a test function and also a script that can be run to result in a 0/1 exit code.
Using a testing tool as a library (that you call) and not a framework (that calls you) has the benefit of making it easier to switch tools later or compose several in the same process.
Instead of having to figure out each test runner's way of only running the tests in a certain file (e.g. with a flag), you just run `./tests/the-file.js`.
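And because each test file is just a module with an exported function, composing a whole suite is plain code too. A run-all script (the file names here are made up for illustration) could be as simple as:

```js
const tests = [
  require('./tests/account-creation.js'),
  require('./tests/acl-origin.js')
]

// Run every exported test() and exit non-zero if any of them reject.
Promise.all(tests.map(mod => mod.test()))
  .then(() => console.log('all tests passed'))
  .catch(error => {
    console.error(error)
    process.exit(1)
  })
```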
Sadly, `ava` does not facilitate these design goals.
bengo@LAPTOP-9VBP4TSS:~/tmp/testava$ cat test.js
#!/usr/bin/env node
const test = require('ava');
test('foo', t => {
  t.pass();
});
test('bar', async t => {
  const bar = Promise.resolve('bar');
  t.is(await bar, 'bar');
});
bengo@LAPTOP-9VBP4TSS:~/tmp/testava$ ./test.js
Test files must be run with the AVA CLI:
$ ava test.js
With distbin (3 years ago), I ended up writing my own testing function in there. It works fine, but I wouldn't do that on a team or bigger project.
More recently I have enjoyed using alsatian. It's typed, has easy support for async/promise tests, and exposes a library interface.
> With this you can treat this test file as both something that exports a test function and also a script that can be run to result in a 0/1 exit code.
Reminds me of what a former sysadmin told me: by relying on exit codes, you can plug the tests into a monitoring solution (like Nagios or Icinga). But I couldn't find the docs that would prove that.
I personally like mocha + chai the most. Ava looked a bit too stripped down at first. But its test execution is waaay faster.
But we are hijacking this thread. Have you blogged about your opinion, so we could continue there?
I've noticed tests starting to fail irregularly; e.g. some tests at https://travis-ci.org/solid/node-solid-server fail sometimes, you just start another build, and then it "goes away". This is probably a sign that some tests may bleed into others sometimes.
I'm creating this issue to try to document this behavior when I spot it later, and I invite others who notice this to contribute as well.