dtracers closed this issue 9 years ago.
I've read this twice, but I'm still not sure what problem you're trying to solve, and what your solution is. Could you put together some sketch code to outline the issue? Or point at some existing test runner code to show what part you want to standardize?
Sure! I've actually been thinking about this, and it may be solved by other parts of the reporting framework?
So for my particular example I will use QUnit because that is what I am most familiar with, but I am sure it applies to others.
So I have been trying to use QUnit with a webdriver (selenium) to automate browser testing. Currently no reporting exists with the webdriver, so there is no way to know whether tests are passing or not. As a result I have been trying to build a reporter that signals when certain events happen (a test finished, a test failed), which is exactly the kind of reporting this project is meant to help with! :+1:
But when you use QUnit as a browser script (because I need the DOM and other normal browser-side plugins) I run into a problem. (And every browser-side-only unit tester will have this problem.)
More background: because some webdrivers drive the actual browser applications, they cannot natively intercept events as they happen. Instead some have a workaround where they create a local node server, listen to every event in the page, and relay it through the node server back to the webdriver.
Source: http://webdriver.io/guide/plugins/browserevent.html
The issue: the tests run before this node listening interface gets set up, making it impossible to listen for events and handle them the way a good reporter should. (A sketch of the race follows below.)
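To make the race concrete, here is a minimal sketch. QUnit.testDone and QUnit.done are QUnit's real reporting callbacks, but the WebSocket address and the relay message format are made up for illustration:

// Hypothetical relay to the webdriver's local node server, in the spirit
// of webdriver.io's browserevent plugin (address and protocol invented).
var bridge = new WebSocket('ws://localhost:5555');

bridge.onopen = function () {
  // These are real QUnit callbacks, but by the time the socket opens and
  // we register them, QUnit has already auto-started on page load and may
  // have finished running -- those events are simply lost.
  QUnit.testDone(function (details) {
    bridge.send(JSON.stringify({ type: 'testDone', data: details }));
  });
  QUnit.done(function (details) {
    bridge.send(JSON.stringify({ type: 'done', data: details }));
  });
};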
The solution: have a standardized way for webdrivers (or test runners in general) to tell the test framework that the reporter is ready and that the tests can start.
Notes: the workflow most affected is unit tests that have to run in a browser environment.
Disadvantage: the framework needs a way to figure out when to run the tests immediately by default (a browser a user is viewing) and when to wait for reporters to get set up (an automated environment).
Hopefully this makes more sense and explains the use case.
Though I am sure other reporters would also like time to set up before the test runner starts (like a code coverage reporter that has to rewrite the JavaScript to do line coverage).
The way I am thinking of this working: when registering, a reporter can optionally ask the test framework to hold off until the reporter is ready (which can obviously be overridden with a timeout or something).
EX:
<script src="testFrameworkX.js">
// (code inside that script)
var delayList = [];

// Called by the framework to determine if it is ready to start running tests.
function readyToRun() {
  for (var i = 0; i < delayList.length; i++) {
    if (!delayList[i]()) {
      return false;
    }
  }
  // ...other local checks from the framework...
  return true;
}

// Reporters call this to delay the test run until func() returns true.
function delayRunning(func) {
  delayList.push(func);
}
</script>

<script src="reporter1.js">
// (code inside that script)
var reporter1 = new Reporter1();
testFrameworkX.registerReporter(reporter1);
testFrameworkX.delayRunning(reporter1.readyToRun.bind(reporter1));
</script>
So in this example the tests would not start running until reporter1 said it was ready to run.
Maybe reporter1 has to do some complicated async work that cannot be handled by listening to an onstart event (like replacing JS code or setting up communication with a node server).
Obviously the above example is rough, almost pseudocode, but I hope it makes the point clear.
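For what it's worth, the same idea reads more naturally with promises. This is only a sketch; delayRunning, startWhenReady, and the five-second timeout are all invented for illustration:

// Promise-based variant of the same idea (all names hypothetical):
// the framework collects promises and waits on them before starting.
var delayPromises = [];

function delayRunning(promise) {
  delayPromises.push(promise);
}

function startWhenReady(runTests) {
  // Give up waiting after five seconds so a broken reporter
  // cannot block the run forever.
  var timeout = new Promise(function (resolve) {
    setTimeout(resolve, 5000);
  });
  Promise.race([Promise.all(delayPromises), timeout]).then(runTests);
}

// A reporter registers the async setup it needs done before tests run:
delayRunning(new Promise(function (resolve) {
  setTimeout(resolve, 100); // stand-in for real async setup work
}));

startWhenReady(function () {
  console.log('All reporters ready (or timed out); tests can start.');
});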
I still can't tell how the issue is related to the goal of the js-reporters project. Webdriver integration certainly is a good problem to tackle, but it is very much out of scope here.
Have you looked at tools that provide webdriver integration out of the box, like Intern? The 3.x releases provide a QUnit interface, so you can use tests written for QUnit.
If you think this is out of scope then that is fine and you can close this. :+1: But I think there are other reporters that could also benefit.
An example I thought of is code coverage reporters. Typically they have to take the existing JavaScript and insert a marker between every line of code to record that it was reached.
Obviously there are lots of edge cases and potential race conditions. If the tests run before the code being covered is done being instrumented, then that reporter is useless. Just my thoughts on how this could help potential reporters; a toy sketch of the idea follows.
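To illustrate what such a reporter is racing against (a toy sketch; real coverage tools like Istanbul parse the AST rather than splitting on newlines, and every name here is invented):

// Toy line-coverage instrumentation: prefix every line with a counter hit.
var hits = {};

function __cover(lineNumber) {
  hits[lineNumber] = (hits[lineNumber] || 0) + 1;
}

function instrument(source) {
  return source
    .split('\n')
    .map(function (line, i) {
      return '__cover(' + i + '); ' + line;
    })
    .join('\n');
}

// The instrumented source has to replace the original *before* any test
// executes it. If the framework auto-starts first, the tests exercise the
// uninstrumented code and `hits` stays empty -- the race described above.
var instrumented = instrument('var a = 1;\nvar b = a + 1;');
eval(instrumented); // stand-in for however the rewritten code is loaded
console.log(hits);  // { '0': 1, '1': 1 }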
It sounds like what you are looking for is a testing solution that integrates all of this for you, like Intern. What you are running into is simply the fact that not all test systems are designed well for this sort of work. A shared reporters project like js-reporters is unlikely to be able to enforce the architectural changes that would be necessary to facilitate the interactions you are envisioning, so IMO you are better off simply using a test system that fits your needs.
Thanks @csnover, I agree with that.
Hi, I have been following this repo for a while because I am a big fan of unit testing JavaScript, the projects I work on have a very unusual JavaScript structure, and I like good code quality overall.
I know the goal of this is to unify the reporters to make things easier for everyone, which is awesome. But I was wondering if you have put any thought into making a common interface for webdrivers or test runners?
So things like the grunt plugins, the selenium webdrivers, or the CLIs, which right now are very tailored to each specific testing framework.
And I am sure that if it were easier to hook in unit testers there would probably be more runners.
I know some work really well with webdrivers, like Mocha, while others work really badly with webdrivers, like QUnit.
And I know that right now 90% of my tests are browser-side-only tests that require the DOM, so I am unusual in that respect. But as more complicated websites become web apps, this kind of testing becomes more important. And with things like Web Components or Polymer getting popular, traditional behavior testing starts to get mixed in with the display.
I am willing to help with anything that needs to be done, but here is my proposal:
A uniform interface for starting tests, which basically has three methods.
Very simple, but I am sure there are other things every testing framework shares that would be useful for the runner to hook into. A sketch of what I mean is below.
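Purely as a sketch of what such an interface could look like; the method names are my own invention for illustration, not taken from any framework:

// Hypothetical uniform interface; every name here is invented.
// The idea: any runner (grunt plugin, selenium webdriver, CLI) could
// drive any test framework through the same small surface.
var runnerInterface = {
  // Register reporters and hooks before anything executes.
  init: function (options) {
    this.reporters = (options && options.reporters) || [];
  },

  // Start the run; return a promise so automated runners can wait on it.
  run: function () {
    return Promise.resolve({ passed: 0, failed: 0 });
  },

  // Tear down after the run (close sockets, flush coverage data, etc.).
  teardown: function () {
    this.reporters = [];
  }
};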
Just an idea! Let me know what you think.