dawiddiwad / newman-async-runner

ASYNCHRONOUS NEWMAN TEST RUNNER

newman option suppressExitCode is not honoured #17

Closed nabeenj closed 3 years ago

nabeenj commented 3 years ago

Hello,

I have been using newman-async-runner to run Postman collections. It's a good tool and works great for the most part.

The tests are run as part of a gitlab pipeline job. If a request from a collection fails, both newman and newman-async-runner return exit status 1. That fails the pipeline job, of course, and prevents any subsequent job steps from running, which also means the report is never stored as an artifact for download at the end of the job's run.

I thought of setting the newman option --suppress-exit-code (suppressExitCode in the newman API) to true, to suppress exit status 1 on test failure and allow the job to continue -

const runner_options = {
    folders: {
        collections: `${__dirname}/../collections/`,
        environments: `${__dirname}/../environments/${projenv}_test.environment.json`,
        reports: `${__dirname}/../reports/`
    },
    newmanOptions: {
        reporters: 'htmlfull',
        suppressExitCode: true
    }
}

Unfortunately that didn't do it; the exit status is still 1.

When running the collection with newman directly and specifying the --suppress-exit-code option on the command line, it does work: the return code is 0, whereas it was 1 with the same test failures before the option was specified.

newman run --suppress-exit-code collections/test_collection.json -e environments/dse_test.environment.json

Is this newman option handled by newman-async-runner?

Thanks, Nabeen

dawiddiwad commented 3 years ago

Hi, and a very warm welcome! Wow, I did not expect anyone to use this runner; it was pushed to npm a long time ago to share with a colleague who needed to solve a specific problem - but now it seems there are hundreds of downloads every week šŸ˜‚

Now, answering your question:

Is this newman option handled by newman-async-runner?

Yes, this option is passed through in raw form straight to newman, as is every other option.

However, the NewmanRunner.runTests() method will always exit with code 0, assuming it did not fail catastrophically on, say, an unhandled exception. As far as I know, 0 is the default exit code for the node process. Here is an example with node process exit code logging; I even used the bail flag to immediately abort the newman run on any failed request:

const runner = require('newman-async-runner');

const runnerOptions = {
    folders: {
        collections: './collections/',
        environments: './environments/'
    },
    newmanOptions: {
        timeoutRequest: 1, // 1ms timeout guarantees every request fails
        bail: true,        // abort the newman run on the first failure
        reporters: 'cli'
    }
};

new runner.NewmanRunner(runnerOptions).runTests();

// log the node process exit code once everything has finished
process.on('exit', function(code) {
  console.log('About to exit with code:', code);
});

and the result is:

TOTAL ASYNC RUNS: 1

newman

test

ā†’ test
  POST https://test.whatever.com [errored]
     ETIMEDOUT

ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¬ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¬ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”
ā”‚                         ā”‚ executed ā”‚   failed ā”‚
ā”œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¼ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¼ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¤
ā”‚              iterations ā”‚        1 ā”‚        0 ā”‚
ā”œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¼ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¼ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¤
ā”‚                requests ā”‚        1 ā”‚        1 ā”‚
ā”œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¼ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¼ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¤
ā”‚            test-scripts ā”‚        0 ā”‚        0 ā”‚
ā”œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¼ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¼ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¤
ā”‚      prerequest-scripts ā”‚        1 ā”‚        0 ā”‚
ā”œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¼ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¼ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¤
ā”‚              assertions ā”‚        0 ā”‚        0 ā”‚
ā”œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”“ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”“ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¤
ā”‚ total run duration: 91ms                      ā”‚
ā”œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¤
ā”‚ total data received: 0B (approx)              ā”‚
ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜

  #  failure                       detail

 1.  Error                         ETIMEDOUT
                                   at request
                                   inside "test"
all test runs completed
About to exit with code: 0

So, how are you executing newman-async-runner in your script? It seems that it exits with code 1 for some reason other than failed newman runs, maybe a script issue. Does the same script pass on your local machine?

nabeenj commented 3 years ago

Hi Dawid,

It's a good bit of code, nice work on it! I had started writing my own js to iterate through a glob of collections from a path and use the newman API to run each of them. I couldn't quite get it working - I think the use of async.eachSeries to asynchronously start the newman runs and the callback of the newman.run method weren't playing well together (or rather, I couldn't figure it out!) and my "custom" runner would just hang. That's when I came across newman-async-runner and realised the hard work had already been done. :clap:
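
For the record, here is roughly what I was attempting - a minimal sketch, not my exact code, assuming the collections sit in a ./collections/ folder next to the script. Wrapping newman.run in a Promise so a plain for...of loop can await each run is probably what my async.eachSeries attempt was missing:

const fs = require('fs');
const path = require('path');
const newman = require('newman');

// wrap newman.run's callback API in a Promise so it can be awaited
const runCollection = (collection) =>
    new Promise((resolve, reject) => {
        newman.run({ collection, reporters: 'cli' }, (err, summary) =>
            err ? reject(err) : resolve(summary));
    });

(async () => {
    const dir = path.join(__dirname, 'collections');
    // run the collection files one at a time instead of juggling callbacks
    for (const file of fs.readdirSync(dir).filter((f) => f.endsWith('.json'))) {
        const summary = await runCollection(path.join(dir, file));
        console.log(`${file}: ${summary.run.stats.assertions.failed} failed assertions`);
    }
})();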

Back to the issue at hand, thank you for the above. That has put me on the right path and reminded me that I am using newman-async-runner with jest, and it's actually the jest assertion that is (rightly) returning exit status 1. Sorry for the noise. :man_facepalming:

Example code:

const runner_options = {
    folders: {
        collections: `${__dirname}/../collections/`,
        environments: `${__dirname}/../environments/${projenv}_test.environment.json`,
        reports: `${__dirname}/../reports/`
    },
    newmanOptions: {
        reporters: 'htmlfull'
    }
}

describe('API tests', function() {
    jest.setTimeout(60000);

    it('smoke tests', async function() {
        for (const eachResult of await new runner(runner_options).runTests()) {
            expect(eachResult.summary.run.stats.assertions.failed).toBe(0);
        }
    })

    it('smoke tests 2', async function() {
        for (const eachResult of await new runner(runner_options).runTests()) {
            expect(eachResult.summary.run.stats.assertions.failed).toBe(0);
        }
    })

..

})

Result:

(node:13542) ExperimentalWarning: Conditional exports is an experimental feature. This feature could change at any time
  console.log
    TOTAL ASYNC RUNS: 1

      at NewmanRunner.setupCollections (node_modules/newman-async-runner/newman-async-runner.js:267:12)

 FAIL  src/smoke-collections-runner.test.js
  API tests
    āœ• smoke tests (2076 ms)
    ā—‹ skipped smoke tests 2
    ā—‹ skipped smoke tests 3
    ā—‹ skipped smoke tests 4
    ā—‹ skipped smoke tests 5

  ā— API tests ā€ŗ smoke tests

    expect(received).toBe(expected) // Object.is equality

    Expected: 0
    Received: 17

      71 |     it('smoke tests', async function() {
      72 |         for (const eachResult of await new runner(runner_options).runTests()) {
    > 73 |             expect(eachResult.summary.run.stats.assertions.failed).toBe(0);
         |                                                                    ^
      74 |         }
      75 |     })
      76 |

      at Object.<anonymous> (src/smoke-collections-runner.test.js:73:68)

Test Suites: 1 failed, 1 total
Tests:       1 failed, 4 skipped, 5 total
Snapshots:   0 total
Time:        4.029 s
Ran all test suites matching /src\/smoke-collections-runner.test.js/i with tests matching "smoke tests".
  console.log
    all test runs completed

      at NewmanRunner.runTests (node_modules/newman-async-runner/newman-async-runner.js:275:12)

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! jest-newman-tests@1.0.0 test:smoke: `jest -t "smoke tests" src/smoke-collections-runner.test.js`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the jest-newman-tests@1.0.0 test:smoke script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /Users/nabeen.jamal/.npm/_logs/2021-10-28T03_56_41_632Z-debug.log

Running locally, I see the report is generated, and I assume it is generated in the pipeline run too; it's just lost there because of this jest exit 1. If you know of any way to handle the exit code from jest, I'd be very grateful for any suggestion you can offer. And please consider this issue closed. :smiley:

Thanks again!

dawiddiwad commented 3 years ago

I have never used jest, so I have no idea whether it can suppress exit codes on failures via some flag or option.

However, I would simply figure out a way of not losing your test report after the run fails with exit status 1 - it kind of defeats the purpose of running tests if you can't review the failures in the report. Simply find a way of handling the exit codes of your pipeline build steps; a quick googling reveals some ideas on how people do it for gitlab ci.
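
For example, a tiny node wrapper around the test command can do it (a rough sketch, not something I have run in gitlab myself; the jest command below is just a placeholder for whatever your test:smoke script runs). It runs jest, logs the status, and always exits 0, so the step that collects the report still runs:

// run-tests.js - hypothetical wrapper around the real test command
const { spawnSync } = require('child_process');

// stream jest output straight to the pipeline log
const result = spawnSync('npx', ['jest', 'src/smoke-collections-runner.test.js'], {
    stdio: 'inherit'
});

console.log('jest exited with status:', result.status);
process.exit(0); // swallow the failure so the report artifact is still collected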

You can always dirty-dance around the issue by brute-forcing the exit code in node itself: add an event listener to the file with your jest test definitions and call the node process.exit method with a specific code:

process.on('exit', (code) => { if (code === 1) process.exit(0) });

dawiddiwad commented 3 years ago

closing as not a šŸ›

nabeenj commented 3 years ago

Just to close the loop on this: it was as simple as adding a when: always clause to the artifacts keyword for the job in the .gitlab-ci.yml file. For example -

  artifacts:
    paths:
      - test/api/reports/
    when: always

That stores the report and makes it available whether the API requests pass or fail. Thanks for pointing me in the right direction.