The original ticket mentioned browserstack-runner as a client of this interface. To make that a bit more specific, the goal here should be to reduce all of these into a single file that uses the interface shared by all testing frameworks: https://github.com/browserstack/browserstack-runner/tree/master/lib/_patch
Following that pattern, Karma shouldn't have to implement adapters for each framework, like the one for QUnit or the one for Mocha.
Both of these examples need a specific set of events, each with specific properties, plus the option to add more events and more properties. The difficult part is figuring out what minimal set of events and properties each framework needs to support in order to claim compatibility.
Plus I really want Mocha's NyanReporter to work for QUnit. :wink:
@jzaefferer: Do you think it would make sense to include the Karma, BrowserStack, and SauceLabs folks on this effort?
Here's some random information and speculation:
So as I see it, there are potentially four types of reporters:
In the spirit of DRY, it might be a good idea to provide interface(s) or mixins or some other common set of utilities.
For example, console-type reporters will typically have a toggle for color, and shouldn't display colors anyway if STDOUT is not a TTY. Browser reporters may benefit from HTML-related utilities including templating. Coverage reporters would probably want to listen to an event that reports coverage data. Since you could have an "HTML coverage" reporter, you shouldn't be locked down to a specific interface.
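For instance, the color check could be as small as this (a minimal Node.js sketch; the `useColors` option name is just illustrative):

```js
// Node.js sketch: only emit ANSI colors when the option is on and stdout is a TTY.
function shouldUseColors(options) {
  return options.useColors !== false && Boolean(process.stdout.isTTY);
}
```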
Regardless of what these reporters actually do, they all respond to one or more events. A quick scan of the reporters in Mocha shows that the following are listened to (more may be emitted):
- `start` - run begins
- `test` - test begins; receives `test` param
- `test end` - test ends; receives `test` param
- `pending` - test is skipped; receives `test` param
- `pass` - test did not fail; receives `test` param
- `fail` - test failed; receives `test` and `error` params
- `suite` - test suite begins; receives `suite` param
- `suite end` - test suite ends; receives `suite` param
- `end` - run ends (done)

What are the next steps here? Perhaps, let's gather the events emitted by QUnit, and other interested parties. I'd like some feedback from the Jasmine team, however, before going too far down the rabbit hole.
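For reference, here's a minimal sketch of a reporter listening to those events (in Mocha, the reporter constructor receives the `runner`, which is the emitter):

```js
// Minimal Mocha reporter sketch: the Runner passed to the constructor
// emits the events listed above.
function MinimalReporter(runner) {
  runner.on('start', function () {
    console.log('run started');
  });
  runner.on('test end', function (test) {
    console.log('test finished:', test.fullTitle());
  });
  runner.on('fail', function (test, err) {
    console.log('failed:', test.fullTitle(), err.message);
  });
  runner.on('end', function () {
    console.log('run finished');
  });
}

module.exports = MinimalReporter;
```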
Having Jasmine on board would be a huge win.
@boneskull: Is it true that the `test end` event would also contain a property like "status" that indicates its pass/fail/pending state? If so, it seems we could ignore Mocha's `pass`, `fail`, and `pending` events as "convenience" events, right?
QUnit's core events:
- `begin` - run begins
- `testStart` - test begins; receives a `details` param (object literal containing various test data... to be discussed in the future)
- `log` - assertion occurred; receives a `details` param (with different data)
- `testDone` - test ends; receives another `details` param (with different data)
- `moduleStart` - test suite begins; receives another `details` param (with different data)
- `moduleDone` - test suite ends; receives another `details` param (with different data)
- `done` - run ends; receives a `details` param (with lots of data)
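For reference, these are registered as callbacks rather than listened to on an emitter; a minimal sketch (assuming the global `QUnit` object is loaded):

```js
// Minimal QUnit reporter sketch: each core event is a callback registration.
QUnit.begin(function (details) {
  console.log('run started');
});
QUnit.testDone(function (details) {
  console.log('test finished:', details.module + ': ' + details.name);
});
QUnit.done(function (details) {
  console.log('run finished:', details.passed + ' passed, ' + details.failed + ' failed');
});
```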
Two important notes about QUnit and how it currently differs from Mocha and Jasmine (AFAIK):

- `module`s (suites) have never been nestable, though I personally would like to change that: jquery/qunit#543

Also, if it benefits anyone, here are comments where I did some initial comparisons of reporting events:

- v1.x, can't recall

Jasmine expects an object to be passed into `addReporter` that responds to any subset of the following interface:
- `jasmineStarted`: called when the jasmine suite starts to run
  - args: Object { totalSpecsDefined }
- `suiteStarted`: called when each `describe` starts to execute
  - args: Object { id, status, description, fullName }
- `specStarted`: called when each `it` starts to execute
  - args: Object { id, description, fullName, failedExpectations, passedExpectations }
- `specDone`: called when each `it` completes execution
  - args: Object { id, status, description, fullName, failedExpectations, passedExpectations }
- `suiteDone`: called when each `describe` completes execution
  - args: Object { id, status, description, fullName }
- `jasmineDone`: called when the jasmine suite completes execution
  - args: none
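For reference, such a reporter is registered in Jasmine 2.x roughly like this (a minimal sketch):

```js
// Minimal Jasmine reporter sketch: only the hooks you care about need to exist.
jasmine.getEnv().addReporter({
  jasmineStarted: function (suiteInfo) {
    console.log('running ' + suiteInfo.totalSpecsDefined + ' specs');
  },
  specDone: function (result) {
    console.log(result.fullName + ': ' + result.status);
  },
  jasmineDone: function () {
    console.log('run finished');
  }
});
```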
OK, so the first three frameworks (Mocha, QUnit, and Jasmine) all definitely have a common subset of events that overlap.
I've filled in the initial details for @js-reporters/intern and @js-reporters/buster from their respective documentation here and here, but I could be mistaken about whether some of these event types are absent on the client side.
| Event Type | Mocha | QUnit | Jasmine | Intern | Buster.JS |
|---|---|---|---|---|---|
| Run Start | `start` | `begin` | `jasmineStarted` | `/suite/start` where `suite.name === "main"` | `suite:start` |
| Suite Start | `suite` | `moduleStart` | `suiteStarted` | `/suite/start` | `context:start` |
| Test Start | `test` | `testStart` | `specStarted` | `/test/start` | `test:start` |
| Test End | `test end` | `testDone` | `specDone` | `/test/end` | `test:success`, `test:failure`, `test:error`, `test:timeout` |
| Suite End | `suite end` | `moduleDone` | `suiteDone` | `/suite/end` | `context:end` |
| Run End | `end` | `done` | `jasmineDone` | `/client/end`, or `/suite/end` where `suite.name === "main"` | `suite:end` |
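To make the overlap concrete, here's a purely illustrative sketch (the `report` helper is hypothetical, nothing here is an agreed API) of feeding the "Test End" row from two of the frameworks into one shared handler:

```js
// Hypothetical sketch: two frameworks feeding one shared "Test End" handler.
function report(framework, testName) {
  console.log('[' + framework + '] test ended: ' + testName);
}

// QUnit's callback registration:
QUnit.testDone(function (details) {
  report('QUnit', details.name);
});

// Mocha, inside a reporter constructor that received the `runner`:
function AdapterReporter(runner) {
  runner.on('test end', function (test) {
    report('Mocha', test.fullTitle());
  });
}
```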
UPDATE: Going forward, let's keep this issue's discussion focused on standardizing event types and, eventually, what data should be associated with each event.
For discussion regarding:
I can dig up a list of Buster's events and data. I think we have one of the more elaborate schemes for this. Where would you like me to post it?
@cjohansen: Use your discretion. Post them as a comment here, a gist, a documentation page, etc. Whatever you think will work best to convey the necessary info.
They're all here: http://docs.busterjs.org/en/latest/modules/buster-test/runner/#events
For brevity/context:
suite:start
(no argument)
Emitted once, as the runner starts running a test suite (a "suite" in Buster is the full collection of tests to run)
suite:end
(results)
{
contexts: 0,
tests: 0,
errors: 0,
failures: 0,
assertions: 0,
timeouts: 0,
deferred: 0
}
Emitted once, when all tests are run
context:start
(context)
{ name: "Some stuff" }
Emitted every time a test context is entered. In Buster, a test context is something that contains tests. It can be test cases, or nested contexts. In Jasmine terms, every `describe` would be represented as a test context in Buster, and you can nest them (like you can in Jasmine and others).
context:end
(context)
Emitted every time a test context is completed.
context:unsupported
(unsupported)
{
context: context,
unsupported: ["label1", "label2"]
}
Emitted every time a context fails its requirements (when that happens, neither `context:start` nor `context:end` is emitted).
test:setUp
(context)
Emitted once per test, before the setup method(s) for that test are called.
test:start
(test)
{ name: "test name", deferred: false }
Emitted after running the test’s setup(s), but before the test itself runs.
test:async
(test)
Emitted when a test has been found to be asynchronous (usually means that the test function was called and has returned).
test:tearDown
(test)
Emitted once per test, before the tear down method(s) for that test are called.
test:failure
(error)
{ name: "name of test", error: {name: "AssertionError", message: "", stack: "" } }
Emitted when the test throws (or otherwise flags) an AssertionFailure(). Only emitted once per test.
test:error
(error)
Emitted when the test throws any error that is not an AssertionFailure(). Only emitted once per test.
test:success
(test)
Emitted if the test passes.
test:timeout
(test)
Emitted if the test runner forcefully aborts the test. This happens when the test is asynchronous and does not resolve within the timeout configured by testRunnerOptions.timeout.
test:deferred
(test)
Emitted when a test is marked as deferred. The test is not run.
uncaughtException
(exception)
In addition to this, some of our reporters make use of events from the scheduling system, such as `client:connect` and more.
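Assuming the runner exposes an EventEmitter-style `on()` (as the reporters in the linked docs use), those four test-ending events could be collapsed into a single "test end" notion with a status. A rough sketch, where the collapse and the status strings are illustrative rather than Buster API:

```js
// Illustrative only: fold Buster's per-outcome events into one "test end" notion.
// `runner` is assumed to be the Buster test runner described in the docs above.
function attachReporter(runner) {
  function onTestEnd(status) {
    return function (payload) {
      console.log('test end:', status, payload && payload.name);
    };
  }
  runner.on('test:success', onTestEnd('passed'));
  runner.on('test:failure', onTestEnd('failed'));
  runner.on('test:error', onTestEnd('errored'));
  runner.on('test:timeout', onTestEnd('timed out'));
}
```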
@JamesMGreene Will get back to you on what data we're passing with `test end`.
Here's a suggestion for "a minimum viable set of standardly-named events", based on comments in this issue: `runStart`, `suiteStart`, `testStart`, `testEnd`, `suiteEnd`, `runEnd`.
A few reasons for these:
It uses names that are valid JavaScript identifiers, which allows using those as keys in JSON and avoids the need to quote keys in regular JS objects or function calls.
I feel like the advantages of namespacing these events are greater than these disadvantages; thus: `reporter.runStart`, `reporter.suiteStart`, `reporter.testStart`, `reporter.testEnd`, `reporter.suiteEnd`, `reporter.runEnd`.
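For illustration only (nothing here is an agreed API; `emitter` stands in for whatever object a framework exposes to reporters), the two shapes under discussion side by side:

```js
// Sketch: the same subscription written two ways; neither is decided.
function subscribe(emitter) {
  // Plain event names, relying on the emitter itself being framework-scoped:
  emitter.on('testEnd', function (test) {
    console.log('test ended:', test.name);
  });

  // "Namespaced" event names, safe even if the emitter is shared more widely:
  emitter.on('reporter.testEnd', function (test) {
    console.log('test ended:', test.name);
  });
}
```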
@JamesMGreene `test end` passes a `test` object, which has boolean properties `failed` or `passed`.
@boneskull: What about `pending` tests?
@JamesMGreene Yes, `pending` as well.
@boneskull: Can you explain your comment on namespacing the events further? I'm not following the need/benefit.
@JamesMGreene @jzaefferer
Namespacing events in an application is not generally necessary, because you have complete control. But we're not writing an application; we're defining a specification.
We don't care what the consumers of this specification do, nor can we make assumptions about their behavior. We should take care not to cause event name conflicts, as consumers (and applications under test!) may emit their own events.
Avoiding implementation headaches for others seems like a good deal at the expense of a few more keystrokes; I don't see a reason why any specification--especially one intended to be used in test--should reserve the global event namespace.
Our aim should be to be as unobtrusive as reasonably possible.
Namespacing seems like a separate issue to me, independent of the actual event names. So far I was assuming that the context of these events is enough to avoid any conflicts, since they're not emitted globally. Based on what I've seen so far, this applies to all the existing patterns used in the frameworks listed here.
Right, isn't the context of the emitter (e.g. MochaEventEmitter, QUnitEventEmitter, etc.) already reduced enough in scope?
@JamesMGreene I think I might need to see a bit of pseudocode to understand what the emitter context will look like. I'm by no stretch an expert user of `EventEmitter`.
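For illustration, a rough sketch of the "emitter context" idea (the `QUnitEventEmitter` name is hypothetical, echoing the earlier comment; nothing here is an agreed API):

```js
// Hypothetical sketch: each framework owns its own emitter, so reporter event
// names only need to be unique within that emitter, not across the whole app.
var EventEmitter = require('events').EventEmitter;

// Created and emitted to by QUnit (or any other framework) internally:
var QUnitEventEmitter = new EventEmitter();

// A reporter subscribes only to that framework's emitter:
QUnitEventEmitter.on('testEnd', function (test) {
  console.log('test finished:', test.name);
});
```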
In addition to what I wrote earlier, I'd rather not tightly couple. I don't want to run into a situation where Mocha is emitting a `foo` event for its reporter and then has to hack around the fact it listens for `foo` itself in some other context. I'd much rather emit two events (which would offer more flexibility), but that's just me.
@boneskull: Read through Issue #3 for more discussion of EventEmitter stuff.
Would it be worthwhile to add a global "uncaughtException"/"unhandledError" event (see #4) as well, or do all frameworks currently handle those by dynamically adding a shell Test to report the failure as a failed "Test"?
@JamesMGreene In Mocha the uncaughts are tossed by the runner then handed to the reporter as a failure. To make sure I understand what we're talking about, the test code I was using was this:
```js
describe('foo', function() {
  it('should toss', function(done) {
    setTimeout(function() {
      // Throws asynchronously, so done() is never reached.
      throw new Error();
      done();
    }, 1000);
  });

  it('should not toss', function() {
    require('assert')(true);
  });
});
```
Resulting in:
foo
1) should toss
✓ should not toss
1 passing (1s)
1 failing
1) foo should toss:
Uncaught Error
at null._onTimeout (/private/tmp/adhoc.spec.js:5:13)
at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)
I haven't seen anyone complain about this behavior, so I don't think it's worth it to differentiate the two.
@boneskull Not exactly... more like one of the following two scenarios:

1. An async error that occurred as a result of a non-async test:
```js
describe('foo', function() {
  it('should toss... eventually', function() {
    setTimeout(function() {
      throw new Error();
    }, 1000);
    // But the test is already over....
  });
});
```
2. An error that is truly unassociated with any test that just so happens to occur during the test run:
```js
setTimeout(function() {
  throw new Error();
}, 1000);

describe('foo', function() {
  it('should not toss... eventually', function(done) {
    setTimeout(done, 1000);
  });
});
```
@boneskull: In other words, outstanding global error scenarios that would need to be caught via `window.onerror = fn1;` in the browser or `process.on("uncaughtException", fn2);` in Node.
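Roughly like this (a sketch; forwarding to `reporter.emit('error', ...)` is illustrative, not an agreed API):

```js
// Sketch only: catch errors not tied to any running test and forward them.
function installGlobalErrorHandlers(reporter) {
  if (typeof window !== 'undefined') {
    window.onerror = function (message, file, line) {
      reporter.emit('error', { message: message, file: file, line: line });
    };
  } else {
    process.on('uncaughtException', function (err) {
      reporter.emit('error', { message: err.message, stack: err.stack });
    });
  }
}
```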
Intern 3.0 implemented a new interface for custom reporters. The event names from our draft match, but there are plenty of additional events. Docs are here: https://theintern.github.io/intern/#custom-reporters
Discussing the `runEnd` event in #12. Includes discussion about data relevant for other events as well.
ok, I'm sitting down rewriting our reporter interface, so I'm wondering if I should use the draft or what. can we like, vote on it?
@boneskull we don't have a formal voting process, maybe we don't need one. So far there were no objections to the draft event names. Intern 3 is implementing the names already, you could go ahead and do the same. You're also welcome to review the proposal for event data in #12 - your feedback would help get that finished!
@boneskull any thoughts on the above? Are you going to use the proposed event names?
Seems fine to me, but I'll cc @mochajs/mocha
guess you can't do that.
cc @travisjeffery @a8m @danielstjules @dasilvacontin @jbnicolai
@boneskull thanks for the inclusion. I'll get back to this tomorrow when I've had time to have a closer look. Kudos on getting back to this, still think the /js-reporters/js-reporters project is awesome!
We on the QUnit team have been discussing the possibility of working with other JS test frameworks that can be run client-side (e.g. Mocha, Jasmine, Intern, Buster, etc.) to agree upon a standard Reporter interface so that we could hopefully share Reporter plugins between testing frameworks.
This would most likely come in the form of:
- an EventEmitter interface (`.on(...)`/`.off(...)`) OR an object with standard "hook" properties (see the sketch below)
- maybe a standard-ish way to add a Reporter, e.g. `MyLib.addReporter(x)`, `MyLib.reporter = x;`, etc.
- a minimum viable set of standard test status types (e.g. pass, fail, skip, todo, pending, etc.)

Would you guys be interested in discussing this further with us?
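For concreteness, a placeholder sketch of those two registration styles (`MyLib` stands in for any participating framework; nothing about the names or shapes is decided):

```js
// Placeholder sketch of the two registration styles mentioned above.
// `MyLib` is not a real library; it stands in for any test framework.

// 1) EventEmitter-style interface:
MyLib.on('testEnd', function (test) {
  console.log(test.name, test.status); // e.g. "pass", "fail", "skip", "todo"
});

// 2) Or handing the framework a reporter object with standard hook properties:
MyLib.addReporter({
  testEnd: function (test) {
    console.log(test.name, test.status);
  }
});
```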
Cross-reference issues: