qunitjs / qunit

🔮 An easy-to-use JavaScript unit testing framework.
https://qunitjs.com

[Feature Request]: Option to run tests in parallel? #1777

Open NullVoxPopuli opened 2 weeks ago

NullVoxPopuli commented 2 weeks ago

It would be great if I could declare a module as parallel, like this:

import { module, test } from 'qunit';

module.parallel('my suite', function (hooks) {
  // all 3 of these tests run "at the same time"
  test('a', async function (assert) {
    await 0;
    assert.ok(true);
  });

  test('b', async function (assert) {
    await 0;
    assert.ok(true);
  });

  test('c', async function (assert) {
    await 0;
    assert.ok(true);
  });
});

These tests are overly simplified, but I imagine the implementation could look something like this:

// pseudo-code
if (module.isParallel) {
  // Start every test in the module at once and wait for all of them to settle.
  await Promise.all(module.tests.map(test => runTest(test)));
} else {
  // Current behavior: run the tests one at a time, in order.
  for (const test of module.tests) {
    await runTest(test);
  }
}

This could greatly improve the performance of async tests that are I/O bound (and therefore spend most of their time awaiting external work).
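As a rough standalone illustration of that claim (no QUnit APIs involved, just plain async functions), three ~100ms waits take ~300ms when awaited one after another, but only ~100ms when overlapped:

// Standalone sketch: serial awaiting vs overlapped awaiting of I/O-like delays.
const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const fakeTests = [() => wait(100), () => wait(100), () => wait(100)];

async function timed(label, run) {
  const start = Date.now();
  await run();
  console.log(label, Date.now() - start, 'ms');
}

await timed('serial', async () => {
  for (const t of fakeTests) {
    await t();
  }
});
await timed('overlapped', () => Promise.all(fakeTests.map((t) => t()))); // ~100ms instead of ~300ms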


Maybe this should be module.concurrent, since you can't actually have parallelism on a single thread.

To have true parallelism, you'd need to split test-running into workers -- which would be useful as well, and maybe easier from a context-isolation perspective?

Krinkle commented 2 weeks ago

That sounds great to me! I've seen ad-hoc ways to do this in a few places already. Most recently, I added one myself in this repo to speed up the CLI tests:

https://github.com/qunitjs/qunit/blob/2e9e9a9f7de03a3d290b8bdb0ad6d5108dc662f5/test/cli/helpers/execute.js#L194

https://github.com/qunitjs/qunit/blob/2e9e9a9f7de03a3d290b8bdb0ad6d5108dc662f5/test/cli/cli-main.js#L13-L22
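The general pattern behind helpers like that is usually a small promise pool that caps how many test runs are in flight at once. A hedged sketch of the pattern, not the actual code behind the links above:

// Generic sketch of a promise pool: map fn over items with at most
// `limit` calls in flight at any given time.
async function concurrentMap(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}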

A few things that come to mind:

In terms of API, a submethod would indeed be the natural choice, but we already have a lot of them:

https://qunitjs.com/api/QUnit/module/

And more are coming in https://github.com/qunitjs/qunit/pull/1772.

For concurrency, perhaps we can use an option instead.

QUnit.module('example', { concurrency: boolean | number }, function (hooks) {
  QUnit.test();
  QUnit.test();
  QUnit.test();
});

Given that the options object doubles as the seed for the test context object, this could potentially conflict with existing code. It would not be unprecedented, in that we already reserve the before, beforeEach, afterEach, and after keys in the same object. But it is something to consider.
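To make the conflict concrete: today, the non-hook properties of the options object are copied onto the test context (the `this` inside tests), so a reserved concurrency key could shadow a property an existing suite already relies on.

// Existing behavior: options properties seed the test context.
QUnit.module('example', { answer: 42 }, function (hooks) {
  QUnit.test('context seeding', function (assert) {
    assert.strictEqual(this.answer, 42);
  });
});

// A suite that already seeds a context property named `concurrency`
// would then collide with a reserved `concurrency` option.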

Another idea would be to add it to the hooks object. That avoids conflicts, and has the benefit of working within the simpler-looking pattern of only using the scope parameter. Passing three parameters feels more complex, with another thing to learn.

QUnit.module('example', function (hooks) {
  hooks.enableParallel();

  QUnit.test();
  QUnit.test();
  QUnit.test();
});

To have true parallelism, you'd need to split test-running into workers -- which would be useful as well, and maybe easier from a context-isolation perspective?

Yeah, that's something we can't really do in core, but it is something the CLI and runners like Testem could do. See the previous discussion in "Split tests for parallelization" (https://github.com/qunitjs/qunit/issues/947).

To do this well, without needing upfront awareness of the modules and tests, you'd want to deterministically shard and filter by moduleId. This has also been requested in "Dynamically filter tests to run" (https://github.com/qunitjs/qunit/issues/1183).
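As a hedged sketch of what deterministic sharding could look like (shardIndex/totalShards are hypothetical parameters, not an existing option): hash each moduleId into a stable bucket, so every runner agrees on the split without coordinating up front.

// Hypothetical sketch: stable assignment of modules to shards.
function inShard(moduleId, shardIndex, totalShards) {
  let hash = 0;
  for (const char of String(moduleId)) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0;
  }
  return hash % totalShards === shardIndex;
}

// A runner for shard 2 of 4 would then only queue modules where
// inShard(module.moduleId, 2, 4) is true.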

Krinkle commented 2 weeks ago

Fixture reset: if we think parallel tests are likely to be useful in a browser, we may want to give each test its own fixture. That could involve providing an automatic DOM reference like "this.fixture" or "assert.getFixture()", or some other mechanism.
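In the meantime, a per-test fixture can already be approximated in user land with hooks. This is just a pattern sketch (the this.fixture name here is my own, not a proposed core API):

// User-land sketch: each test gets its own container element.
QUnit.module('isolated fixtures', function (hooks) {
  hooks.beforeEach(function () {
    this.fixture = document.createElement('div');
    document.body.appendChild(this.fixture);
  });
  hooks.afterEach(function () {
    this.fixture.remove();
  });

  QUnit.test('renders into its own fixture', function (assert) {
    this.fixture.textContent = 'hello';
    assert.strictEqual(this.fixture.textContent, 'hello');
  });
});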

QUnit.config.current may also need some thought. There are still extension features for which this is the only documented way to access certain information. That's solvable: we could document assert.test as an officially supported accessor to make up for it.
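For illustration, the difference would look roughly like this (assert.test exists today but is undocumented, so treat this as a sketch):

QUnit.test('which test is running', function (assert) {
  // Global and ambiguous once several tests are in flight at the same time:
  console.log(QUnit.config.current.testName);
  // Tied to this specific test's assert object, so unambiguous under concurrency:
  console.log(assert.test.testName);
});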

Krinkle commented 2 weeks ago

If we limit this to the CLI, then a safer option would indeed be to use subprocesses. A fairly straightforward approach could be to spread the work out by test file. This is cruder, in that tests concentrated in one module wouldn't run in parallel. But that might be offset by the gain of truly parallel main threads, and by entire modules running concurrently.
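A hedged sketch of what that could look like outside of core (hypothetical, using Node's child_process directly rather than any existing QUnit CLI flag):

// Hypothetical sketch: run one `qunit <file>` subprocess per test file, in parallel.
// Assumes the `qunit` binary is resolvable on PATH (e.g. via node_modules/.bin).
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

async function runFiles(files) {
  await Promise.all(files.map(async (file) => {
    // Rejects (failing the overall run) if the subprocess exits non-zero.
    const { stdout } = await execFileAsync('qunit', [file]);
    process.stdout.write(stdout);
  }));
}

// e.g. runFiles(['test/a.js', 'test/b.js', 'test/c.js']);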