Open wyantb opened 8 years ago
@wyantb thanks for the detailed issue report. I agree that this is unintuitive at first. I think more thorough documentation is the correct answer here.
Allow me to explain: `before` and `after` are intended as "setup" hooks for the module. This means that they should really only be impacting the state of the module they are associated with, storing such state on the testing context through `this`. This would include things such as setting up an expensive mock that is to be the same for each test in the module, or defining options that will be used in all tests. Global state modifications really should be made and reset around each test to ensure that tests are atomic.
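As a sketch of that intent (a toy stand-in runner so the snippet executes standalone — not QUnit itself, and the names are invented): build one expensive shared object per module in `before`, park it on the shared test context via `this`, and reset only cheap per-test state in `afterEach`.

```javascript
// Toy model of module-level hooks. In a real suite these would be QUnit's
// own hooks inside QUnit.module(name, hooks => { ... }); the mini runner
// here is an invented stand-in so the sketch runs by itself.

const log = [];

function runModule(hooks, tests) {
  const context = {};                              // shared `this` for the module
  if (hooks.before) hooks.before.call(context);    // once, before the first test
  for (const test of tests) {
    test.call(context);
    if (hooks.afterEach) hooks.afterEach.call(context);
  }
  if (hooks.after) hooks.after.call(context);      // once, after the last test
}

runModule(
  {
    before() {
      this.mock = { calls: 0 };                    // expensive, module-wide setup
      log.push("before");
    },
    afterEach() {
      this.mock.calls = 0;                         // cheap per-test reset
      log.push("afterEach");
    },
    after() {
      this.mock = null;                            // module-wide teardown
      log.push("after");
    },
  },
  [
    function () { this.mock.calls++; log.push("test1"); },
    function () { this.mock.calls++; log.push("test2"); },
  ]
);

console.log(log.join(" -> "));
// → before -> test1 -> afterEach -> test2 -> afterEach -> after
```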
@trentmwillis What about call counts? If `before` for a given module is run multiple times as the test runner jumps around modules, that's not ideal either. Personally, I think we need to rethink ordering given `before`/`after` -- when reordering was conceived originally, there was only `beforeEach`/`afterEach` (known as `setup`/`teardown` then).
`before` won't run multiple times; it is implemented to only ever run once, based on the meta information for the module it is associated with.
All your points here are valid, yes; my argument is simply based on what I feel would be more useful. `before`/`after` as they are now are definitely coherent, but if they were implemented in such a way that global modifications are legal, they would, I think, be more useful for the use cases I describe, and less likely to give devs a shock. Generally I expect people to be actively working (and causing failures) in a limited area, not to somehow be more interested in unit tests from unrelated modules. Randomizing order within and between modules would still meet that goal, without intermixing.
Brian Wyant
On Nov 12, 2016 3:34 PM, "Trent Willis" notifications@github.com wrote:
Re-classifying as a docs issue for now. Projects that prefer not to have re-ordering, e.g. due to the likelihood of global state being shared across modules, can use `QUnit.config.reorder = false;`, which is fully supported.
If the overall test suite is slow, one can reduce this through the module selector, general filter, etc. which means only the area you're currently working on will run each time.
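For reference, the opt-out is a one-liner (the `QUnit` global is stubbed below only so the fragment is self-contained; in a real test page QUnit is already loaded):

```javascript
// Stub only so this fragment runs standalone; a real page provides QUnit.
const QUnit = globalThis.QUnit || { config: {} };

// Fully supported opt-out: run tests in declared order, never reordered.
QUnit.config.reorder = false;

// Alternatively, narrow what runs while iterating on one area via the
// test page's URL parameters, e.g. ?module=my-widget or ?filter=widget
```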
Please bear with me a moment for an explanation - there's a good chance you won't agree with me initially, and I have a sliding scale of thoughts, based on my experience, on what could be done.
To start, as a demonstration, suppose I have a suite that acts like the following:
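(The original snippet isn't preserved in this thread. A hypothetical reconstruction in plain Node — not QUnit, with module and widget names invented — that matches the behavior described below might be:)

```javascript
// Toy model of the suite: module B builds a widget once in `before` and has
// one genuinely broken test; module A creates/destroys a widget around its
// own test. Depending on whether A's test lands between B's hooks, a run
// shows one or two failures.

const fixture = {};                  // stands in for #qunit-fixture
let failures = [];

function createWidget() {
  // Like a jQuery widget factory: returns any existing instance on the fixture.
  if (!fixture.widget) fixture.widget = { alive: true };
  return fixture.widget;
}
function destroyWidget() { delete fixture.widget; }

const moduleB = {
  before: () => createWidget(),
  after: () => destroyWidget(),
  existingTest() { if (!fixture.widget) failures.push("B: existing test"); },
  buggyNewTest() { failures.push("B: new test (still broken)"); },
};
const moduleA = {
  // Legacy style: create and destroy a widget around its own test.
  unrelatedTest() { createWidget(); destroyWidget(); },
};

// "Module uninterrupted": B runs as a unit → only the broken test fails.
moduleB.before();
moduleB.buggyNewTest();
moduleB.existingTest();
moduleB.after();
const uninterrupted = failures.length;   // 1 failure

// "Module interrupted": A's test runs between B's hooks, grabs B's widget
// instance, and tears it down → B's previously-passing test fails too.
failures = [];
moduleB.before();
moduleB.buggyNewTest();
moduleA.unrelatedTest();
moduleB.existingTest();
moduleB.after();
const interrupted = failures.length;     // 2 failures

console.log({ uninterrupted, interrupted });
// → { uninterrupted: 1, interrupted: 2 }
```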
After running the tests in my browser and refreshing once or twice, I see the following pattern: on one refresh, 1 failure; on the next refresh, 2 failures:
"Module interrupted" scenario - two failures
"Module uninterrupted" scenario - one failure
So, why do I consider this an issue?
I accidentally got myself into a situation that acted like the above yesterday. In practice, I was unit testing a jQuery widget, and giving the QUnit 2-style API with `before`/`after` a try for the first time. Historically, our tests using the jQuery widget create a widget on `#qunit-fixture` in `beforeEach`, and tear it down in `afterEach`. This is a little inefficient, taking some 20-50 milliseconds, so I wanted to try out a new way: create the widget in `before`, reuse it for each test by performing much cheaper `clearAll`-type methods in `afterEach`, then finally tear it down in `after`.

I was successful in this, but then, as I was putting together a new test, it failed all the time (before the test was good). I was very confused when more than just this test failed, though. It was because other modules were jutting in, trying to create a widget on `#qunit-fixture`, actually getting the existing instance from the `before` of my new module (since `after` hadn't run yet, since module B hadn't finished all its tests yet), and tearing down the grid that module B had created. Since module B only created the widget in `before`, subsequent tests from module B always failed.

What do I propose?
My guiding instinct here is that `before`/`after` are not all that useful if there's no guarantee that a module will run all of its cases together; if they can get interrupted by unrelated tests, people can burn a few hours trying to figure out why. In my case, this was a jQuery widget, but I could see similar circumstances for sinon spy/stub injection on prototypes, dependency injection swaps, etc.

So, I propose one of the following, in order of personal preference:

1. When `QUnit.config.reorder` is set to true, randomize the order that modules run in (and the order that tests run in within a module, if desired), but do not reorder such that a module B's tests can have module A's tests run in between B's `before` and `after` hooks.
2. When `QUnit.config.reorder` is set to true, ensure that any module that defines `before`/`after` hooks in some fashion cannot get interrupted by tests from another module (but leave legacy behavior intact for modules w/o `before`/`after`).
3. Document the interaction between `before`, `after`, and `QUnit.config.reorder`.
Overall
I do like QUnit; I just wonder whether folks other than me might run into this same situation. But anyway, thank you for reading this far!