nightwatchjs / nightwatch

Integrated end-to-end testing framework written in Node.js, using the W3C WebDriver API. Developed at @browserstack
https://nightwatchjs.org
MIT License

Test passed successfully even if failed #1103

Closed - Tims101 closed this issue 5 years ago

Tims101 commented 8 years ago

Hi there,

I have a test, failed.js, which throws an exception in its after method.


var assert = require('assert');

module.exports = {

  after: function(browser) {
    throw 1;          // thrown before browser.end() can run
    browser.end();
  },

  'Failed test': function(browser) {
    assert(false);    // guaranteed assertion failure
  }
};

If I run it separately with nightwatch --test tests/failed.js, I get

Starting selenium server... started - PID:  15027

[Failed] Test Suite
=======================

Running:  Failed test
 ✖ AssertionError: false == true - expected "true" but got: "false"
    at Object.module.exports.Failed test (...tests/tests/failed.js:11:5)
    at _combinedTickCallback (internal/process/next_tick.js:67:7)

FAILED:  1 assertions failed (7ms)

1

It works properly.

But if I run all tests with nightwatch --suiteRetries 3 in parallel mode, it doesn't write any information about that test and returns success as the result.

nightwatch --suiteRetries 3

Starting selenium server in parallel mode... started - PID:  15048

Started child process for: tests/one-test
Started child process for: tests/another-test
Started child process for: tests/failed 

  >> tests/failed finished.  
....
# information about other tests
# success in result

nightwatch v0.9.5 node v3.8.6

It works properly on 0.8.18 - it shows the errors in the log and a failed status in the result. My nightwatch.json.

Cheers.

UPD: I have the same issue even without the --suiteRetries flag.

mdharamadas commented 8 years ago

+1.. I see the same issue with the --retries option set. In addition, it doesn't count the test as a failure even though there's an error (when the --retries option is NOT set).

senocular commented 8 years ago

Looks similar to #908?

Tims101 commented 8 years ago

@senocular, thanks for the link.

I've just checked it without --suiteRetries - I have the same issue as before.

senocular commented 8 years ago

TestSuite.prototype.runHookMethod is lacking any error handling, and if something like an after produces an error, it jacks everything up, including the reporter's ability to identify test failures.

I might try and take a look at a fix when I get the chance.
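The diagnosis above points at the hook invocation itself. A hypothetical sketch of the missing error handling (names and shape assumed for illustration - this is not Nightwatch's actual source):

```javascript
// Hypothetical sketch, NOT Nightwatch's actual runHookMethod: the idea
// is simply to catch a synchronous throw from a hook and route it to
// the completion callback so the reporter can count it as a failure.
function runHookMethod(hook, context, onComplete) {
  try {
    hook.call(context, function done(err) {
      onComplete(err || null);
    });
  } catch (err) {
    // Without this catch, a sync throw (like the after() in the repro)
    // escapes and corrupts the runner's result state.
    onComplete(err);
  }
  // Note: an asynchronous throw inside the hook still escapes this
  // try/catch, which is why the async cases behave differently.
}

// A hook that throws synchronously, like the after() in the repro:
runHookMethod(function () { throw new Error('boom'); }, {}, function (err) {
  console.log('hook error:', err.message);
});
```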

senocular commented 8 years ago

@Tims101 I have a fix for this, but while working on getting some tests in and increasing coverage for hooks, I've managed to open a Pandora's box of hooks error handling. There are a few more failure points to consider. It's going to take a little while longer to wrap them all up.

Tims101 commented 8 years ago

@senocular, take your time, it's very good that you found the problem, cheers!

senocular commented 8 years ago

Here are the results of what I've found. Behavior is most consistent with synchronously thrown errors in hooks, but it's all over the place everywhere else.

timeouts:
    global-before:
        - exits (existing timers may prolong the process before exit)
        - exit=0
    global-beforeEach:
        - hangs indefinitely
    before:
        - immediately runs next suite (no other suite hooks)
        - reporter logs TEST FAILURE: 0 assertions failed... "✖ <suite-name>" listed with error.message
            "done() callback timeout of ..." (red, no stack)
        - SKIPPED lists all suite testcases
        - global after()
        - exit=passes ? 0 : 1 (timeout failure does not affect passes)
    beforeEach:
        - immediately runs next suite (no other suite hooks)
        - reporter logs TEST FAILURE: 0 assertions failed... "✖ <suite-name>" listed with error.message
            "done() callback timeout of ..." (red, no stack)
        - SKIPPED lists remaining suite testcases
        - global after()
        - exit=passes ? 0 : 1 (timeout failure does not affect passes)
    afterEach:
        - global after()
        - output
            "Error: done() callback timeout of ..." (red)
            "    at ..." (white)
            "    at ..." (white)
        - run ends (next suite does not run)
        - exit=1
    after:
        - immediately runs next suite (no other suite hooks)
        - reporter logs TEST FAILURE: 0 assertions failed... "✖ <suite-name>" listed with error.message
            "done() callback timeout of ..." (red, no stack)
        - global after()
        - exit=passes ? 0 : 1 (timeout failure does not affect passes)
    global-afterEach:
        - hangs indefinitely
    global-after:
        - exits (existing timers may prolong the process before exit)
        - exit=passes ? 0 : 1

---

sync throws:
    global-before:
        - output
            "There was an error while starting the test runner:" (red)
            "Error: <thrown message>" (gray)
            "    at ..." (gray)
            "    at ..." (gray)
        - exit=2
    global-beforeEach:
        - global after()
        - output
            "Error: <thrown message>" (red)
            "    at ..." (white)
            "    at ..." (white)
        - exit=1
    before:
        - global after()
        - output
            "Error: <thrown message>" (red)
            "    at ..." (white)
            "    at ..." (white)
        - exit=1
    beforeEach:
        - global after()
        - output
            "Error: <thrown message>" (red)
            "    at ..." (white)
            "    at ..." (white)
        - exit=1
    afterEach:
        - global after()
        - output
            "Error: <thrown message>" (red)
            "    at ..." (white)
            "    at ..." (white)
        - exit=1
    after:
        - global after()
        - output
            "Error: <thrown message>" (red)
            "    at ..." (white)
            "    at ..." (white)
        - exit=1
    global-afterEach:
        - global after()
        - output
            "Error: <thrown message>" (red)
            "    at ..." (white)
            "    at ..." (white)
        - exit=1
    global-after:
        - output
            "Error: <thrown message>" (red)
            "    at ..." (white)
            "    at ..." (white)
        - exit=1

---

async throws:
    global-before:
        - exits
        - exit=0
    global-beforeEach:
        - global after()
        - output:
            "Error: <thrown message>" (red)
            "    at ..." (white)
            "    at ..." (white)
        - exit=1
    before:
        - global after()
        - output:
            "Error: <thrown message>" (red)
            "    at ..." (white)
            "    at ..." (white)
        - exit=1
    beforeEach:
        - outputs
            "✖ Error: <thrown message>" (red)
            "    at ..." (gray)
            "    at ..." (gray)
        - beforeEach timeout handler
        - runs next suite (no other suite hooks)
        - reporter logs TEST FAILURE: 0 assertions failed... "✖ <suite-name>" listed with error.message
            "done() callback timeout of ..." (red, no stack)
        - SKIPPED lists remaining suite testcases
        - global after()
        - exit=passes ? 0 : 1 (timeout failure does not affect passes)
    afterEach:
        - outputs
            "✖ Error: <thrown message>" (red)
            "    at ..." (gray)
            "    at ..." (gray)
        - hangs indefinitely
    after:
        - outputs
            "✖ Error: <thrown message>" (red)
            "    at ..." (gray)
            "    at ..." (gray)
        - after timeout handler
        - runs next suite
        - reporter logs TEST FAILURE: 0 assertions failed... "✖ <suite-name>" listed with error.message
            "done() callback timeout of ..." (red, no stack)
        - global after()
        - exit=passes ? 0 : 1 (timeout failure does not affect passes)
    global-afterEach:
        - outputs
            "✖ Error: <thrown message>" (red)
            "    at ..." (gray)
            "    at ..." (gray)
        - hangs indefinitely
    global-after:
        - output
            "✖ Error: <thrown message>" (red)
            "    at ..." (gray)
            "    at ..." (gray)
        - exit=0

---

async throws + done():
    before test suites run:
        - ignored
    in test suite:
        - outputs (at time of throw)
            "✖ Error: <thrown message>" (red)
            "    at ..." (gray)
            "    at ..." (gray)
        - test hooks continue for this test case (no matter when error occurs?)
        - remaining test cases skipped
        - after(), global afterEach()
        - runs next suite
        - reporter logs TEST FAILURE: 1 error during execution,  0 assertions failed... "✖ <suite-name>"
        - SKIPPED lists remaining suite testcases of suite running during failure
        - outputs
            "Error while running [<testsuite> / <testcase>]:" (red)
            "Error: <thrown message>" (gray)
            "    at ..." (gray)
            "    at ..." (gray)
        - exit=1
    after test suite:
        - outputs (at time of throw)
            "✖ Error: <thrown message>" (red)
            "    at ..." (gray)
            "    at ..." (gray)
        - global after()
        - exit=passes ? 0 : 1 (error does not affect passes)
    ... + other possible error cases depending on throw timing (i.e. done() & throw in same call stack)
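The "done() callback timeout" lines throughout these results come from a timer armed around async hooks. A minimal sketch of that mechanism (assumed shape, not Nightwatch's internals):

```javascript
// Minimal sketch (assumed, not Nightwatch internals) of the done()
// callback timeout reported in the results above: a timer is armed
// when an async hook is called and cancelled when done() fires.
function callHookWithTimeout(hook, ms, cb) {
  var finished = false;
  var timer = setTimeout(function () {
    if (finished) return;
    finished = true;
    cb(new Error('done() callback timeout of ' + ms + 'ms was reached'));
  }, ms);
  hook(function done(err) {
    if (finished) return; // done() arriving after the timeout is ignored
    finished = true;
    clearTimeout(timer);
    cb(err || null);
  });
}

// A hook that never calls done() triggers the timeout path:
callHookWithTimeout(function (done) { /* forgot done() */ }, 50, function (err) {
  console.log(err.message);
});
```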

rkavalap commented 8 years ago

@senocular Did you consider adding the end_session_on_fail flag into your mix? https://github.com/nightwatchjs/nightwatch/issues/704

senocular commented 8 years ago

@rkavalap: I'm not really paying attention to what's happening to the driver session at this point, though that will need to be considered later. Currently I'm mostly just trying to see if these errors are getting handled at all, and if so, then determine how disruptive it is when they are.

There are other things too. For example, if a suite before() runs but there's an error in something like a beforeEach(), should the suite after() then also get called to allow cleanup of anything set up in before()? This is currently done one level up with the global after(), as seen largely in the sync-throws results, where the global after() still gets called after the error. Should that be carried down into suites? For comparison, I'm pretty sure Mocha just bails and fails, ignoring any additional tests/hooks when something like that happens.

I think what you see under async throws + done(): in test suite: in the results above is the closest to what I would expect for most of these. It's the same thing you get if a test case were to throw. The context would change in the global case, and even then I think it's questionable when you're talking about the global *Each() hooks (are they considered part of the suite?).
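The cleanup question here amounts to a small policy decision in the suite runner. A hypothetical sketch (structure assumed, not Nightwatch code): run the suite's after() for cleanup whenever its before() completed, even if a later hook threw.

```javascript
// Hypothetical sketch of the cleanup policy being debated: if before()
// completed, run after() even when a later hook errors, so anything
// before() set up still gets torn down. Not Nightwatch's actual code.
function runSuite(suite, cb) {
  var beforeRan = false;
  try {
    suite.before();
    beforeRan = true;
    suite.beforeEach();
    suite.test();
    suite.afterEach();
  } catch (err) {
    if (beforeRan) suite.after(); // cleanup despite the failure
    return cb(err);
  }
  suite.after();
  cb(null);
}
```

The alternative is the bail-and-skip behavior attributed to Mocha above, where nothing after the failing hook runs at all.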

senocular commented 8 years ago

Will likely need to coordinate with #1039

beatfactor commented 8 years ago

@senocular I already have a fix for #1039 but it needs more tests. I can push the branch; it would be good to get another opinion. I think this will fix some of those cases, and the rest should be treated separately. Thanks for your in-depth analysis.

But it looks like the issue here is not about how the errors inside hooks get handled, but rather how the errors inside child processes get picked up by the main process.

senocular commented 8 years ago

@beatfactor yeah, get the fix for #1039 up there when you can. My original fix for the OP should have covered some of the other cases too, but not all - it was essentially a try/catch in runHookMethod. But then I noticed afterEach was getting special-cased and didn't go much further beyond that before testing. There's potentially more to check around done() + throws too, but I'm passing those off as edge cases and ignoring them for now.

Also, for my results above, I should mention that test_workers was turned off and skip_testcases_on_fail was at its default (true) - both settings I figured might have some influence on how things ultimately work here.
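For context, those two settings sit in nightwatch.json roughly like this (a hedged fragment; exact placement can vary by version, with skip_testcases_on_fail normally under test_settings):

```json
{
  "test_workers": false,
  "test_settings": {
    "default": {
      "skip_testcases_on_fail": true
    }
  }
}
```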

One thing I noted about the errors was the formatting. There was more variation than I expected, which makes me wonder how many different places are handling the errors and whether there might be any crossover.

To summarize reporting variations:

a "custom error" here being an error message that wasn't explicitly thrown (like for a timeout), or something added alongside a thrown error to give additional context

rkavalap commented 8 years ago

@senocular @beatfactor What branch are the changes pushed to? I can keep an eye on it and test the changes.

yegorski commented 7 years ago

@beatfactor is this the PR with the fix? Is there any way someone can help you? https://github.com/nightwatchjs/nightwatch/pull/1134

ptilt commented 7 years ago

I ran into a similar situation today using --task and --group. I just tried to use a group of tests to improve my testing time, and now when I use --task the test does not run and returns this message:

Starting selenium server in parallel mode... started - PID:  11144

If you want to upload Screenshots to S3
      please set your AWS Environment Variables (see readme).
Started child process for: adquirentes\antecipacao

  >> adquirentes\antecipacao finished.

And I don't know any other way to modularize my tests.

stale[bot] commented 5 years ago

This issue has been automatically marked as stale because it has not had any recent activity. If possible, please retry using the latest Nightwatch version and update the issue with any relevant details. If no further activity occurs, it will be closed. Thank you for your contribution.