Closed Tims101 closed 5 years ago
+1. I see the same issue with the --retries option set. In addition, it doesn't count the test as a failure even though there's an error (when the --retries option is NOT set).
Looks similar to #908?
@senocular, thanks for the link. I've just checked it without --suiteRetries and have the same issue as before.
TestSuite.prototype.runHookMethod is lacking any error handling, and if something like an after hook produces an error, it jacks everything up, including the reporter's ability to identify test failures.
I might try and take a look at a fix when I get the chance.
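To illustrate the kind of fix being discussed, here is a minimal sketch of wrapping a hook invocation in a try/catch so that a throwing hook is reported through a completion callback instead of crashing the runner. All names here (runHookSafely, onComplete) are illustrative, not Nightwatch's actual API:

```javascript
// Hypothetical sketch: invoke a hook so that synchronous throws are
// captured and reported instead of bubbling up and breaking the reporter.
function runHookSafely(hookFn, context, onComplete) {
  try {
    if (hookFn.length > 0) {
      // Hook declares a `done` callback: treat it as asynchronous.
      hookFn.call(context, function done(err) {
        onComplete(err || null);
      });
    } else {
      // Synchronous hook: any throw lands in the catch below.
      hookFn.call(context);
      onComplete(null);
    }
  } catch (err) {
    onComplete(err);
  }
}

// Usage: a throwing after() hook is surfaced via the callback.
runHookSafely(function after() {
  throw new Error('cleanup failed');
}, {}, function (err) {
  console.log(err ? 'hook failed: ' + err.message : 'hook ok');
});
// -> "hook failed: cleanup failed"
```

This minimal version doesn't guard against a hook that calls done() and then throws; that case needs extra handling.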
@Tims101 I have a fix for this, but while working on getting some tests in and increasing coverage for hooks, I've managed to open a Pandora's box of hook error handling. There are a few more failure points to consider. It's going to take a little while longer to wrap them all up.
@senocular, take your time, it's very good that you found the problem, cheers!
Results of what I've found: behavior is most consistent with synchronously thrown errors in hooks, but kind of all over the place everywhere else.
timeouts:
global-before:
- exits (existing timers may prolong the process before exit)
- exit=0
global-beforeEach:
- hangs indefinitely
before:
- immediately runs next suite (no other suite hooks)
- reporter logs TEST FAILURE: 0 assertions failed... "✖ <suite-name>" listed with error.message
"done() callback timeout of ..." (red, no stack)
- SKIPPED lists all suite testcases
- global after()
- exit=passes ? 0 : 1 (timeout failure does not affect passes)
beforeEach:
- immediately runs next suite (no other suite hooks)
- reporter logs TEST FAILURE: 0 assertions failed... "✖ <suite-name>" listed with error.message
"done() callback timeout of ..." (red, no stack)
- SKIPPED lists remaining suite testcases
- global after()
- exit=passes ? 0 : 1 (timeout failure does not affect passes)
afterEach:
- global after()
- output
"Error: done() callback timeout of ..." (red)
" at ..." (white)
" at ..." (white)
- run ends (next suite does not run)
- exit=1
after:
- immediately runs next suite (no other suite hooks)
- reporter logs TEST FAILURE: 0 assertions failed... "✖ <suite-name>" listed with error.message
"done() callback timeout of ..." (red, no stack)
- global after()
- exit=passes ? 0 : 1 (timeout failure does not affect passes)
global-afterEach:
- hangs indefinitely
global-after:
- exits (existing timers may prolong the process before exit)
- exit=passes ? 0 : 1
---
sync throws:
global-before:
- output
"There was an error while starting the test runner:" (red)
"Error: <thrown message>" (gray)
" at ..." (gray)
" at ..." (gray)
- exit=2
global-beforeEach:
- global after()
- output
"Error: <thrown message>" (red)
" at ..." (white)
" at ..." (white)
- exit=1
before:
- global after()
- output
"Error: <thrown message>" (red)
" at ..." (white)
" at ..." (white)
- exit=1
beforeEach:
- global after()
- output
"Error: <thrown message>" (red)
" at ..." (white)
" at ..." (white)
- exit=1
afterEach:
- global after()
- output
"Error: <thrown message>" (red)
" at ..." (white)
" at ..." (white)
- exit=1
after:
- global after()
- output
"Error: <thrown message>" (red)
" at ..." (white)
" at ..." (white)
- exit=1
global-afterEach:
- global after()
- output
"Error: <thrown message>" (red)
" at ..." (white)
" at ..." (white)
- exit=1
global-after:
- output
"Error: <thrown message>" (red)
" at ..." (white)
" at ..." (white)
- exit=1
---
async throws:
global-before:
- exits
- exit=0
global-beforeEach:
- global after()
- output:
"Error: <thrown message>" (red)
" at ..." (white)
" at ..." (white)
- exit=1
before:
- global after()
- output:
"Error: <thrown message>" (red)
" at ..." (white)
" at ..." (white)
- exit=1
beforeEach:
- outputs
"✖ Error: <thrown message>" (red)
" at ..." (gray)
" at ..." (gray)
- beforeEach timeout handler
- runs next suite (no other suite hooks)
- reporter logs TEST FAILURE: 0 assertions failed... "✖ <suite-name>" listed with error.message
"done() callback timeout of ..." (red, no stack)
- SKIPPED lists remaining suite testcases
- global after()
- exit=passes ? 0 : 1 (timeout failure does not affect passes)
afterEach:
- outputs
"✖ Error: <thrown message>" (red)
" at ..." (gray)
" at ..." (gray)
- hangs indefinitely
after:
- outputs
"✖ Error: <thrown message>" (red)
" at ..." (gray)
" at ..." (gray)
- after timeout handler
- runs next suite
- reporter logs TEST FAILURE: 0 assertions failed... "✖ <suite-name>" listed with error.message
"done() callback timeout of ..." (red, no stack)
- global after()
- exit=passes ? 0 : 1 (timeout failure does not affect passes)
global-afterEach:
- outputs
"✖ Error: <thrown message>" (red)
" at ..." (gray)
" at ..." (gray)
- hangs indefinitely
global-after:
- output
"✖ Error: <thrown message>" (red)
" at ..." (gray)
" at ..." (gray)
- exit=0
---
async throws + done():
before test suites run:
- ignored
in test suite:
- outputs (at time of throw)
"✖ Error: <thrown message>" (red)
" at ..." (gray)
" at ..." (gray)
- test hooks continue for this test case (no matter when error occurs?)
- remaining test cases skipped
- after(), global afterEach()
- runs next suite
- reporter logs TEST FAILURE: 1 error during execution, 0 assertions failed... "✖ <suite-name>"
- SKIPPED lists remaining suite testcases of suite running during failure
- outputs
"Error while running [<testsuite> / <testcase>]:" (red)
"Error: <thrown message>" (gray)
" at ..." (gray)
" at ..." (gray)
- exit=1
after test suite:
- outputs (at time of throw)
"✖ Error: <thrown message>" (red)
" at ..." (gray)
" at ..." (gray)
- global after()
- exit=passes ? 0 : 1 (error does not affect passes)
... + other possible error cases depending on throw timing (e.g. done() & throw in the same call stack)
@senocular Did you consider adding the end_session_on_fail flag into your mix? https://github.com/nightwatchjs/nightwatch/issues/704
@rkavalap: I'm not really paying attention to what's happening to the driver session at this point, though that will need to be considered later. Currently I'm mostly just trying to see if these errors are getting handled at all, and if so, then determine how disruptive it is when they are.
There are other things too. For example, if a suite before() runs but there's an error in something like a beforeEach(), should the suite after() then also get called to allow for any cleanup from the before()? This is currently being done one level up with the global after(), as seen largely in the sync throws results, where the global after() still gets called after the error. Should that be carried down into suites? For comparison, I'm pretty sure Mocha just bails and fails, ignoring any additional tests/hooks when something like that happens.
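The cleanup question above can be reduced to a small control-flow sketch (hypothetical runner, not Nightwatch's actual code): if before() ran, run after() even when beforeEach() throws, so resources acquired in before() can still be released.

```javascript
// Illustrative sketch: a try/finally around the suite body guarantees
// after() runs for cleanup whenever before() completed, mirroring the
// global-level behaviour described in the results above.
async function runSuite(hooks, testcases) {
  let beforeRan = false;
  try {
    if (hooks.before) {
      await hooks.before();
      beforeRan = true;
    }
    for (const test of testcases) {
      if (hooks.beforeEach) await hooks.beforeEach();
      await test();
      if (hooks.afterEach) await hooks.afterEach();
    }
  } finally {
    // Even if beforeEach() or a test threw, give after() a chance
    // to clean up whatever before() created.
    if (beforeRan && hooks.after) await hooks.after();
  }
}
```

The alternative (Mocha-style) would be to bail without running after(); the sketch just makes the trade-off concrete.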
I think what you see under "async throws + done(): in test suite:" in the results above is the closest to what I would expect to happen for most of these. This is the same thing you get if a test case were to throw. The context would change in the global case, and even then I think it's questionable when you're talking about the global *Each() hooks (are they considered part of the suite?).
Will likely need to coordinate with #1039
@senocular I already have a fix for #1039 but it needs more tests. I can push the branch, would be good to get another opinion. I think this will fix some of those cases and the rest should be treated separately. Thanks for your in-depth analysis.
But it looks like the issue here is not about how the errors inside hooks get handled, but rather how the errors inside child processes get picked up by the main process.
@beatfactor yeah, get the fix for #1039 up there when you can. My original fix for the OP should have covered some of the other cases too, but not all; it was essentially a try/catch in runHookMethod. But then I noticed afterEach was getting special-cased and didn't go much further beyond that before testing. There's potentially more to check around done() + throws too, but I am passing those off as edge cases and ignoring them for now.
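The done() + throw edge case mentioned here can be shown in a tiny sketch (hypothetical runner, illustrative names): once done() has fired, a later throw in the same call stack can't complete the hook a second time, so it needs separate handling.

```javascript
// Sketch: a hook that signals completion and then throws in the same
// call stack. A naive try/catch would report the error after the
// runner has already moved on, so we guard against double completion.
function invokeHook(hookFn, onComplete) {
  let completed = false;
  const done = function (err) {
    if (completed) return; // guard against completing twice
    completed = true;
    onComplete(err || null);
  };
  try {
    hookFn(done);
  } catch (err) {
    if (!completed) {
      done(err); // sync throw before done(): report it normally
    } else {
      // done() already fired; surface the error without completing again
      console.log('late error: ' + err.message);
    }
  }
}

invokeHook(function (done) {
  done();                      // hook says it's finished...
  throw new Error('too late'); // ...then throws anyway
}, function (err) {
  console.log(err ? 'failed' : 'completed');
});
// -> "completed" followed by "late error: too late"
```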
Also, for my results above, I should mention that test_workers was turned off and skip_testcases_on_fail was at its default (true); I figured those settings might have some influence on how things ultimately work here.
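For context, the run described above corresponds roughly to a nightwatch.json containing the following (a sketch; only the two settings mentioned are shown, and the nesting of skip_testcases_on_fail under test_settings is assumed from the 0.9-era config layout):

```json
{
  "test_workers": false,
  "test_settings": {
    "default": {
      "skip_testcases_on_fail": true
    }
  }
}
```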
One thing I noted about the errors was the formatting. There was more variation than I expected which makes me wonder how many different places there are handling the errors and if there might be any crossover.
To summarize reporting variations: a "custom error" here means an error message that wasn't explicitly thrown (like for a timeout), or something added in addition to a thrown error to give additional context.
@senocular @beatfactor Which branch are the changes pushed to? I can keep an eye on it and test the changes.
@beatfactor is this the PR with the fix? Is there any way someone can help you? https://github.com/nightwatchjs/nightwatch/pull/1134
I got a similar situation today using --task and --group. I just tried to use a group of tests to improve my testing time, and now, when I use --task, the tests do not run and return this message:
Starting selenium server in parallel mode... started - PID: 11144
If you want to upload Screenshots to S3
please set your AWS Environment Variables (see readme).
Started child process for: adquirentes\antecipacao
>> adquirentes\antecipacao finished.
And I don't know any other way to modularize my tests.
This issue has been automatically marked as stale because it has not had any recent activity. If possible, please retry using the latest Nightwatch version and update the issue with any relevant details. If no further activity occurs, it will be closed. Thank you for your contribution.
Hi there,
I have a test failed.js that contains an exception in the after method. If I run it separately with nightwatch --test tests/failed.js, it works properly.
But if I run all tests with nightwatch --suiteRetries 3 in parallel mode, it doesn't write any information about that test and returns success in the result.
nightwatch v0.9.5, node v3.8.6
It works properly on 0.8.18: it shows the errors in the log and a failed status in the result. My nightwatch.json.
Cheers.
UPD: I have the same issue even without the --suiteRetries flag.