jestjs / jest

Delightful JavaScript Testing.
https://jestjs.io
MIT License

Allow asynchronously skipping tests #8604

Open wolever opened 5 years ago

wolever commented 5 years ago

🚀 Feature Proposal

Other testing frameworks allow tests to asynchronously decide whether they should skip themselves.

For example, in Mocha:

it('tests a remote service', async function() {
  if (!(await remoteService.isActive())) {
    return this.skip()
  }
  … test the remote service …
})

Currently, however, it is impossible to asynchronously decide whether a test should be skipped in Jest.

See also the related discussion in https://github.com/facebook/jest/issues/7245

Motivation

Some tests either depend on - or are explicitly testing - remote services which may or may not be available.

Without being able to programmatically and asynchronously decide whether tests can be skipped, there are only three options for writing these sorts of tests:

  1. Decide that they will either always pass or always fail if the service is unavailable. In either case the result can be misleading (ie, because in many cases "failure" indicates "the service is wrong", not merely "the service is unavailable", and "passing" suggests that everything is okay, which is also not necessarily true).

  2. Keep them in a separate suite, one per remote service, which can be run with, for example, npm run test:service-a.

  3. Use a regular expression (or similar) to include / exclude these tests from a test run (see the config sketch after this list).
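
A minimal sketch of option 3 using Jest's testPathIgnorePatterns option; the test/remote-services directory name is an assumption:

// jest.config.js: keep remote-service tests out of the default run
// (the test/remote-services path is hypothetical)
module.exports = {
  testPathIgnorePatterns: ['/node_modules/', '<rootDir>/test/remote-services/'],
};

Option 2 is the same idea inverted: an npm script per service that runs Jest with a pattern matching only that service's suite.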

Example

A complete, real-world (but anonymized) example from a Mocha-based test suite:

describe('example.com', () => {
  // Mocha's suite-level hook is before(); beforeAll is the Jest spelling
  before(async function() {
    try {
      await fetch('https://example.com/api/whoami')
    } catch (e) {
      // Note: Mocha's this.skip() ignores any message argument
      return this.skip(`Skipping tests for example.com: ${e}`)
    }
  })

  it('returns the current user', async () => { … })
  it('does the other thing', async () => { … })
})
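
For comparison, a hypothetical Jest call site for the requested behaviour; no such API exists in Jest today, and the context.skip name is purely illustrative:

describe('example.com', () => {
  beforeAll(async (context) => {
    try {
      await fetch('https://example.com/api/whoami')
    } catch (e) {
      // illustrative only: Jest currently has no runtime skip, which is the point of this issue
      return context.skip(`Skipping tests for example.com: ${e}`)
    }
  })

  it('returns the current user', async () => { … })
})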

Pitch

This belongs in Jest core because:

  1. It's currently supported by Mocha: https://mochajs.org/ (search for this.skip)
  2. It's impossible to implement outside of core (see elaboration below)
  3. The related discussion on #7245 suggests that it's important to a number of people (see, for example, this comment, which as of this writing has 16 👍: https://github.com/facebook/jest/issues/7245#issuecomment-432240159)

FAQ

Why can't you use an if-statement?

A common suggestion in #7245 is to use an if-statement (or similar) to skip tests:

describe('example.com', () => {
  // remoteService.isActive() returns a Promise here, which is always truthy,
  // so this check can never actually exclude the test
  const isActive = remoteService.isActive()
  if (isActive) it('returns the current user', async () => { … })
})

However, this does not work: tests must be declared synchronously, but the "is the remote service active?" check is necessarily asynchronous.

Wouldn't it be better if the tests failed/succeeded/retried/did something else?

There are situations where this is true, but (as evidenced by discussion on #7245) there are also situations where "skip tests when a remote service is not available" is a reasonable business decision (for example: https://github.com/facebook/jest/issues/7245#issuecomment-474587219).

mattphillips commented 5 years ago

I struggle to see the value of this feature; you essentially want support for a sort of pseudo false-positive state. When service A is unavailable, test B is skipped. What do you want to do with that information?

By skipping in a CI environment you will receive no notification of this state without explicitly going in and checking. What is the difference from just leaving the test as a pass when the service is unavailable?

lucasfcosta commented 5 years ago

My 2cents on this matter (quite an interesting thing to think about btw):

@mattphillips I guess there might be no practical difference but the semantic difference is huge. This matters if you want to use the information produced by Jest to determine what happened during the build.

A practical example: consider a test which depends on an external service. I push new code which would make that test fail and introduce a regression. When CI runs the checks on the code I've pushed, the service is not available, but the test is marked as passed even though it didn't run. My code then fails in production, and I have no way of telling whether the test was flaky or simply never ran.

The false positive state, as you have described, would be to have a test which is actually not running but being marked as passed.

Now, I can understand the appeal of this feature, as it has a real-world correlation, but one might argue that introducing it would encourage (or at least acknowledge that it is common to write) tests that aren't deterministic, i.e. tests which rely on something you do not have control over. On the other hand, there doesn't seem to be any reasonable manner to implement this outside of the core, and it might be a decision from Jest that it doesn't want to be prescriptive about what its users should do. In the case described by @wolever, for example, it seems reasonable to have tests being asynchronously skipped.

To sum it up: I'd say this is a valid feature, even though it's warranted only in a minority of cases. Since in those cases there doesn't seem to be a workaround that is "good enough" (or precise enough), I'd say it would be OK to have it in.

langpavel commented 5 years ago

What about some cascading? If a precondition test fails, only one error would be reported, along with a count of skipped tests.

If I write a test for a 3rd-party service's availability, only that one fails, and I can see the real reason the other tests did not run.

That feels much better than seeing hundreds of failing tests.

wolever commented 5 years ago

Thanks @lucasfcosta - that's a great summary of the problem.

I would also add that "conditional skipping" can be environment-aware. For example, in my environment, certain tests are skipped only if they are running in a development environment (e.g. tests which require a certain amount of local setup) but are required in a CI environment:

describe('some complicated service', () => {
  // A regular function (not an arrow) so that this.skip is reachable, and the
  // availability check is negated: we skip when the service is *not* running
  beforeAll(async function() {
    const shouldSkip = (
      process.env.NODE_ENV === 'development' &&
      !(await checkServiceAvailability())
    )
    if (shouldSkip)
      return this.skip(`Skipping tests because ComplicatedService is not running (hint: run it with ./complicated-service)`)
  })

  … rest of the tests …
})

ClaytonSmith commented 4 years ago

@mattphillips What would your solution be for platform specific tests?

It would be wildly inappropriate to falsely mark tests as passed or failed when it doesn't even make sense to run the test.

The questions we want our test frameworks to answer are: Can I run this test? Did the test pass? A boolean pass/fail cannot answer both of those questions.

langpavel commented 4 years ago

Jest can mark tests as skipped. I'm using this hack, but it only works synchronously:

const haveDb = !!process.env.DB_CONNECTION_STRING;
const testDb = haveDb ? test : test.skip;
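
Usage then looks like this (the test body is a hypothetical example):

// runs only when DB_CONNECTION_STRING is set; otherwise reported as skipped
testDb('reads rows from the database', async () => {
  // ... assertions that need a live database ...
});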

GoMino commented 4 years ago

Any update on this?

jfairley commented 4 years ago

To add to @langpavel's idea, you could run an asynchronous check in a custom testEnvironment class:

// amqp-test-environment.js (filename is hypothetical)
const { connect } = require("amqplib");
const NodeEnvironment = require("jest-environment-node");
const { logger } = require("../src/logging");

class TestEnvironment extends NodeEnvironment {
  constructor(config) {
    super(config);
  }

  async setup() {
    await super.setup();

    // rabbit
    try {
      this.global.amqpConnection = await connect({
        username: "admin",
        password: "admin",
      });
      this.global.channel = await this.global.amqpConnection.createChannel();
      this.global.hasAmqp = true;
    } catch (err) {
      logger.warn("AMQP is not available. Skipping relevant tests.");
      this.global.hasAmqp = false;
    }
  }

  async teardown() {
    if (this.global.amqpConnection) {
      await this.global.amqpConnection.close();
    }
    await super.teardown();
  }
}

module.exports = TestEnvironment;

The test file can then read the flags the environment put on the global object (Channel is amqplib's type):

import { Channel } from "amqplib";

describe("example", () => {
  const hasAmqp: boolean = (global as any).hasAmqp;
  const channel: Channel = (global as any).channel;

  // the global `it` can't simply be reassigned; bind the right variant to a new name
  const itIfAmqp = hasAmqp ? it : it.skip;

  itIfAmqp("should send/receive", async () => {
    // ...
  });
});
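
For completeness, Jest also needs to be pointed at the custom environment, e.g. in the config (the path is an assumption):

// jest.config.js: use the custom environment for these tests (path is hypothetical)
module.exports = {
  testEnvironment: "<rootDir>/test/amqp-test-environment.js",
};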

nstaromana commented 4 years ago

I want to add a use case for programmatically skipping tests: Our product runs on multiple platforms & uses the same Jest tests for each platform. Sometimes a platform implementation is not yet complete, the product will report this dynamically via API (but only after setup in beforeAll()). Our tests also use snapshot matching. What happens is, if a platform doesn't support a feature and the test bails out, Jest will complain about the unused snapshot, resulting in a test failure. Our current workaround is to use 'inline snapshots', but this is exceedingly cumbersome. A programmatic skip() API would allow our tests to skip when a feature is not available on a certain platform, and also avoid having the Jest engine complain about unused snapshots.
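
A sketch of the interaction being described (all names are hypothetical):

it('renders the widget', async () => {
  // bail out when the platform lacks the feature; the test counts as "passed"...
  if (!(await api.supportsWidget())) return;
  // ...but on that platform the matcher below never runs, so Jest reports the
  // stored snapshot as obsolete
  expect(await renderWidget()).toMatchSnapshot();
});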

MrLoh commented 4 years ago

I created a proof-of-concept PR (#9944) that demonstrates how easy this is to implement in jest-circus. I think it is well worth adding this functionality to jest-circus, as it allows custom bailing behaviors (#6527) or stepped tests (#8387) to be implemented easily in user land.

kara-ryli commented 4 years ago

I found this issue because I'm writing a series of integration tests; I'd like to write my tests so that they can run against any deployment of the API, but only run certain tests if that deployment has a feature flag enabled, e.g.:

Works

describe('feature available in some deployments but not others', () => {
  const enabled = getFeatureFlag();
  (enabled ? it : it.skip)('runs a test that is only sometimes applicable', () => { /* ... */ });
});

Does not

// does not work: describe callbacks must be synchronous, so Jest cannot
// wait for the feature-flag lookup before registering tests
describe('feature available in some deployments but not others', async () => {
  const enabled = await getFeatureFlag();
  (enabled ? it : it.skip)('runs a test that is only sometimes applicable', () => { /* ... */ });
});

An async describe, an async setupFilesAfterEnv, or a jest.skip() method I could call in the context of a test or before hook would all work. In the meantime, this sort of workaround feels doable but ugly, because it leaves the developer wondering whether the test actually ran:

describe('feature available in some deployments but not others', () => {
  it('runs a test that is only sometimes applicable', async () => {
    const enabled = await getFeatureFlag();
    // reported as "passed" even when the feature is off and nothing was asserted
    if (!enabled) { return; }
  });
});
shanebdavis commented 3 years ago

I'd like to add another concrete use-case for the asynchronous, skip-from-within-a-test feature:

I've created a package for use with Jest and Mocha that lets you create a sequence of tests which are dependent on each other. For example:

let { auth, createPost, createComment, getComments, logOut } = require("../examples/TestApp");
let { chainedTest } = require("@art-suite/chained-test");

// NOTE: This file INTENTIONALLY fails in Mocha to demonstrate how failures are handled.

const aliceEmail = "alice@test.com";
const postBody = "The quick brown fox jumped over the lazy dog.";
const commentBody = "Brilliant!";

// The return-result of this first test will be passed as the second argument
// to all subsequent tests in the chain.
chainedTest("Alice's user story", () => auth(aliceEmail))

// In "then" tests, the test's return value is passed to the next test.
// skipped: if neither this nor any dependent tests are selected by test framework
.thenIt("lets Alice create a post", () =>
  createPost(postBody)
)

.softTapIt("softTapIt failures don't skip following tests", () => {
  throw new Error("fake-failure in softTapIt");
})

// "tap" tests: ignores the test's return value. Instead it passes lastTestValue through.
// skipped: if neither this nor any dependent tests are selected by test framework
.tapIt("lets Alice create a comment", (post, alice) =>
  createComment(post.id, commentBody)
)

.tapIt("tapIt or thenIt failures WILL skip remaining tests", () => {
  throw new Error("fake-failure in tapIt");
})

.thenIt("can get the created comment from the post", (post, alice) =>
  getComments(post.id)
)

// In "softTap" tests, the test's return value is ignored.
// Instead it passes lastTestValue through to the next test.
// skipped: if not selected by test framework
.softTapIt("should have only one comment by Alice", (comments, alice) => {
  expect(comments.length).toEqual(1);
  expect(comments[0].userId).toEqual(alice.id);
})

.tapIt("should be able to logOut", logOut)

If one of these tests fails, the rest cannot succeed. Logically, they need to be skipped. However, we don't want to report them as "passed", since they never ran. Likewise, we don't want to report them as "failed", since they didn't actually fail.

What we need is a third option: report the test as "skipped" (or "pending", as Mocha calls it). Note that the first failure will still be reported as a failure, so the suite properly fails and CI/CD fails. However, without "skip" there is no correct way in Jest to report what happened, i.e. no correct way to report that the rest of the tests were not run.

In Mocha this works beautifully:

✓ Alice's user story
✓ needs to create a post
1) softTapIt failures don't skip following tests
✓ lets Alice create a comment
2) tapIt or thenIt failures WILL skip remaining tests
- can get the created comment from the post
- should have only one comment by Alice
- should be able to logOut

3 passing (11ms)
3 pending
2 failing

1) softTapIt failures don't skip following tests:
    Error: fake-failure in softTapIt
    at ChainedTest.test (test/api.js:23:11)

2) tapIt or thenIt failures WILL skip remaining tests:
    Error: fake-failure in tapIt
    at ChainedTest.test (test/api.js:33:11)

Thank you @wolever for championing this feature-request. It's important, and I hope it makes it into Jest soon. Cheers!

MrLoh commented 3 years ago

I think the chances of this ever being added to Jest are extremely low. I've had a PR out for months now and we haven't gotten any feedback on it from the Jest maintainers. If you need this, I'd suggest looking for Jest alternatives.

github-actions[bot] commented 2 years ago

This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 14 days.

liamdocherty10 commented 2 years ago

I believe this is still a desired feature. Commenting to keep it open.

saltman424 commented 1 year ago

Another thing I want to add to the conversation, particularly in relation to the related request for test.step (and hopefully test.optional or test.step.optional) in #8387: a use case I have encountered a lot for skipping from within a test is handling more complicated versions of steps that need more ad hoc logic. For example:

const thingsToCreate = [ ... ]
const createdThings = []

test.each(thingsToCreate)('create %s', async (thing) => {
  await createThing(thing) // Might throw error
  createdThings.push(thing)
  ...
})

test('process one thing', async () => {
  if (createdThings.length < 1) {
    skip() // we can't run this test (skip() here is the API being requested)
  }
  await processOneThing(createdThings[0])
  ...
})

test('process two things', async () => {
  if (createdThings.length < 2) {
    skip() // We can't run this test
  }
  await processTwoThings(createdThings[0], createdThings[1])
  ...
})

test('process three things', async () => {
  if (createdThings.length < 3) {
    skip() // We can't run this test
  }
  await processThreeThings(createdThings[0], createdThings[1], createdThings[2])
  ...
})

And of course, this is just one example. There are plenty of cases where a test needs some combination of previous tests to have succeeded, to varying degrees, in order to be relevant. While test.step would handle simple cases of this, it would be difficult, and in many cases slow, to make it work across all the variations that come up regularly.

Of course, as has already been discussed, you can always just pass or fail instead of skip, but failures clutter the results, making it harder to see the actual problem that needs to be resolved, and passing gives a false impression of success.

AkoElvis commented 1 year ago

Hey, jest maintainers! Any updates or feedback?

meredian commented 1 year ago

@palmerj3 @mattphillips Any chance this can be progressed? Especially since there's a proof-of-concept PR (#9944) that actually works.

Personally, I also used that behaviour a lot in Mocha, to match tests with conditions (multiple complex tests, multiple versions of code under test; not every test is viable for every version).

gerardolima commented 1 year ago

This would be a useful feature, especially for using Jest in integration tests, but after reading the comments from Jest contributors, I'm not confident this will move forward. As Node has recently put some effort into providing its own built-in testing framework (node:test), I most likely won't use or recommend Jest for new backend projects.
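
For reference, node:test supports exactly this pattern: the test context passed to each test exposes a skip() method that can be called at run time (the availability check below is hypothetical):

const { test } = require('node:test');

test('remote service behaviour', async (t) => {
  if (!(await remoteServiceIsActive())) { // hypothetical availability check
    return t.skip('remote service unavailable');
  }
  // ... assertions against the remote service ...
});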

albertodiazdorado commented 9 months ago

@gerardolima Jest has never been a good choice for backend projects. One could argue that Jest has always been the worst choice available for backend projects.

albertodiazdorado commented 9 months ago

By the way, you can use the undocumented Jest feature pending if you want to skip tests based on run-time information. It is not documented, though (it comes from the underlying Jasmine runner, so it may not be available under the newer jest-circus runner), so who knows what will happen to it in the future:

let a: string | undefined = undefined;
it("test1", async () => {
  a = await myRequest();
  expect(a).toEqual("myString");
});

it("test2", () => {
  if (typeof a !== "string") return pending("Skipping test");
  expect(a.length).toBe(55);
});

(I know that you don't write tests like this, but there are enough examples in this thread already)