Closed: erikerikson closed this issue 6 years ago
For what it's worth, I don't know of a safe way to run quibble (and therefore load your deps) anywhere other than a beforeEach or equivalent. If you load your dependencies at the top level, you're limited to one test case in the suite (or, if using a process-splitting tool like ava, one test case per module), because once you've loaded a subject with quibble'd dependencies, you're going to experience test pollution from case to case. Even if the author uses extreme discipline, a stubbing meant to apply to one test case will still linger into the next, and it would be very, very difficult to avoid accidental coupling between test cases, if not outright bugs.
I'd sooner propose documentation that says "this requires you to load your dependencies separately before each test".
Speaking as someone who has encountered lots (and lots and lots) of hard-to-trace bugs in test suites specifically because of state being shared across tests, I strongly recommend moving your requires into beforeEach.
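To make that concrete, here is a minimal mocha sketch of the recommended arrangement (the paths `./dependency` and `./subject`, and the shape of their exports, are invented for illustration):

```js
// Minimal sketch (hypothetical paths): replace the dependency and load
// the subject fresh in beforeEach, then undo everything in afterEach.
const quibble = require('quibble')

describe('subject', function () {
  let subject

  beforeEach(function () {
    // Replace the dependency *before* requiring the subject that uses it.
    quibble('./dependency', () => Promise.resolve('stubbed'))
    subject = require('./subject')
  })

  afterEach(function () {
    // Undoes all replacements, so no stubbing lingers into the next test.
    quibble.reset()
  })

  it('sees the stubbed dependency', function () {
    return subject().then(function (result) {
      // assert against result here
    })
  })
})
```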
I recognize that this contradicts the zeitgeist for how JS unit tests are written (with all the subject-under-test and test-helper imports at the top). However, the vast number of bugs I've encountered that were caused (or at least enabled) by this pattern leads me to believe it is harmful.
Mocha presently doesn't have an option for randomizing the tests in a test run, though I frequently approximate a random run by running each test file directly (something like `for t in tests/*.js; do mocha $t; done`). This accomplishes two things: it ensures that every test file can be run independently (has the necessary requires, etc.), and it catches any latent tests that depend on other test files having already run. Of course, this doesn't catch test pollution between tests in the same file, nor does it catch all types of test pollution. But I'd have $0 if I had a dollar for every client test suite that passed this simple check without a single test failure. And the solution in most of those failures was to move the requires into beforeEach, thus allowing any stubbings to be reset between each test.
Heh. Religion. :D Not without basis. You are correct about the risks. I appreciate the perspective and that you both took the time to share it.
Being purely pragmatic... we're moving from one test that always succeeds to a reasonable set of tests and coverage. At this point, the risk of introducing a bug that wastes a few engineers' time while writing tests is much smaller than the risk of having no tests, a risk that scales across a user base.
Here's what I actually did to encounter this, knocking out a dependency of the module under test. https://github.com/Nordstrom/serverless-artillery/blob/even-moar-tests/tests/lib/index.spec.js#L33
Used in these two spots: https://github.com/Nordstrom/serverless-artillery/blob/even-moar-tests/tests/lib/index.spec.js#L631 https://github.com/Nordstrom/serverless-artillery/blob/even-moar-tests/tests/lib/index.spec.js#L642
And reset shortly thereafter: https://github.com/Nordstrom/serverless-artillery/blob/even-moar-tests/tests/lib/index.spec.js#L650
Basically, I knock out the one method that dependency exports, replacing it with a method that invokes a variable assumed to hold a function; in fact, it holds a function that returns a promise. The dependency is used by only one method in the module under test, around which only a few tests exist.
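A hypothetical reconstruction of that setup (with invented names; the real code is in the links above) might look like:

```js
// Invented names throughout. The dependency's single export is knocked
// out once, at module scope, with a shim that delegates to a
// reassignable variable holding a promise-returning function.
const quibble = require('quibble')

let impl = () => Promise.resolve('default')
quibble('./the-dependency', (...args) => impl(...args))
const subject = require('./module-under-test')

describe('the method that uses the dependency', function () {
  afterEach(function () {
    impl = () => Promise.resolve('default') // "reset shortly thereafter"
  })

  it('resolves with the test-specific result', function () {
    impl = () => Promise.resolve('test-specific')
    return subject.methodUnderTest().then(function (result) {
      // assert against result here
    })
  })
})
```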
The reason I wrote this issue up was that I did experience test pollution, across test modules instead of across individual tests within a module. The reason I experienced that is that I followed two rules: load dependencies at the module level, and reset in afterEach. If all my dependencies were loaded in beforeEach then this would've been great. But they weren't. I've definitely been flowing with the "zeitgeist", as it were, with my attention elsewhere rather than on refining my testing patterns. (Any advice on where to "get dumb" on the current accepted best patterns?)
So... you're right, there's a potential test-pollution risk I've created, but it's also not entirely non-standard patterning. Because of this, and because my usage works, I believe some clarification could be helpful to users with a similar mental configuration. In some sense one must always be mindful of the scope of effect one's code has and of any side effects that could produce unexpected behavior; this would apply equally if I were performing a few input-variant tests under the same unit test with a single dependency knocked out. That's valid in some circumstances, even if it's a generally poor pattern.
Anyway... feel free to close this out if you like, I won't be offended, but I think there's at least some clarification here that could save some subset of your user population a little time.
Thanks for the additional thoughts, Erik. I agree this is confusing (you're not the first to hit this snag). Because calling require anywhere but the top of a file is unusual, I've just made it more explicit in the docs, including a bold statement to stick it into a before-each hook.
Docs updated here: https://github.com/testdouble/testdouble.js/blob/master/docs/7-replacing-dependencies.md#nodejs
Fixed by https://github.com/testdouble/testdouble.js/commit/f90754b3b6428ad9a2368e8035f1db6a737ea8d6
Sorry, I don't have the time to grok how to integrate this into the docs and PR changes.
Docs "bug"
This could be considered to apply to testdouble docs as well.
The quibble.reset call needs to be made at the same scope where the quibble call is made. The docs declare "A reset() method that undoes everything, intended to be run afterEach test runs", which is the case if you load your dependencies in beforeEach methods, but not if you load them at the module level.

Clone https://github.com/erikerikson/quibble-test/tree/across-tests and run npm test for a reproduction. If I make the changes at https://github.com/erikerikson/quibble-test/compare/across-tests...across-test-correctly then I get the expected result.
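Sketched side by side (invented paths, not the repro repo's literal code), the two arrangements differ only in where the quibble call sits relative to the reset:

```js
// Two alternative test-file arrangements, shown together for contrast.
const quibble = require('quibble')

// Failing arrangement: quibble'd at module scope, reset per test.
// The first afterEach undoes the module-scope replacement, so every
// later test in the process sees the real dependency again.
quibble('./dependency', () => Promise.resolve('fake'))
const subject = require('./subject')
afterEach(function () {
  quibble.reset()
})

// Working arrangement: quibble, require, and reset all live at the
// same per-test scope, so each test replaces, loads, and cleans up
// symmetrically.
let freshSubject
beforeEach(function () {
  quibble('./dependency', () => Promise.resolve('fake'))
  freshSubject = require('./subject')
})
afterEach(function () {
  quibble.reset()
})
```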