Closed KatieWoe closed 1 year ago
See https://bayes.colorado.edu/continuous-testing/aqua/html/qunit-test.html?url=..%2F..%2Futterance-queue%2Futterance-queue-tests.html%3Fea%26brand%3Dphet-io and it looks like this is also causing scenery tests to not run.
```
PhetioObject.js:398 Uncaught TypeError: Cannot read property 'dataStream' of undefined
    at UtteranceQueue.phetioStartEvent (PhetioObject.js:398)
    at UtteranceQueue.stepQueue (UtteranceQueue.js:323)
    at TinyEmitter.emit (TinyEmitter.js:69)
    at Emitter.js:33
    at Timer.execute (Action.js:225)
    at Timer.emit (Emitter.js:58)
    at UtteranceTests.js:40
```
Should we be running scenery/utterance-queue tests with `brand=phet-io` anyway?
Equivalent Scenery test that isn't completing: https://bayes.colorado.edu/continuous-testing/aqua/html/qunit-test.html?url=..%2F..%2Fscenery%2Fscenery-tests.html%3Fea%26brand%3Dphet-io
`PhetioObject` should be graceful and not assume that `phet.phetio.dataStream` exists. I think that is proper behavior for the main supertype in the project. Commit coming soon!
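As a rough illustration of the graceful guard described above (hypothetical names and shapes; this is not the actual `PhetioObject` source), the fix amounts to checking for the data stream before using it:

```javascript
// Hypothetical sketch of a graceful phetioStartEvent: bail out when
// phet.phetio.dataStream is missing, as it is when tests run without
// the phet-io brand, instead of throwing
// "Cannot read property 'dataStream' of undefined".
const phet = { phetio: undefined }; // stand-in for the global in a non-phet-io run

function phetioStartEvent( eventName ) {
  // No data stream available: no-op gracefully rather than crash
  if ( !phet.phetio || !phet.phetio.dataStream ) {
    return false; // nothing recorded
  }
  phet.phetio.dataStream.push( eventName );
  return true;
}

console.log( phetioStartEvent( 'stepQueue' ) ); // false: no data stream, no crash
```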
Committed above. I could go one of two ways for this issue. @samreid, this could also be solved by using `qunitStart` in these tests, but I don't really want to have to import all of phet-io into every unit test that uses `PhetioObject`. Can you recommend whether we should proceed with the above commit, or convert all test js files to use `qunitStart` (or both)?
It seems like `phet.phetio.dataStream` should exist if `PHET_IO_ENABLED` is true, so perhaps we should use `qunitStart` everywhere. That said, I don't see any harm in also keeping the graceful check you added. What is your sense for this issue?
I much prefer using `qunitStart` rather than adding flexibility throughout our code (which was recently converted to use globals for all phet-io modules to support es6 module dynamic loading). I converted all remaining test files above.
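For context, the `qunitStart` approach boils down to deferring QUnit's autostart until the required globals are in place. A minimal sketch of that pattern (the `QUnit` object here is a mock, and the names are illustrative, not the actual chipper implementation):

```javascript
// Mock QUnit for illustration; the real one comes from the qunit library.
const QUnit = {
  config: { autostart: true },
  started: false,
  start() { this.started = true; }
};

// Sketch of the qunitStart pattern: disable autostart, then poll until
// the required globals (e.g. phet.phetio) are loaded before starting.
function qunitStart( isReady ) {
  QUnit.config.autostart = false;
  const tryStart = () => {
    if ( isReady() ) {
      QUnit.start(); // dependencies loaded, begin the test run
    }
    else {
      setTimeout( tryStart, 10 ); // try again shortly
    }
  };
  tryStart();
}

qunitStart( () => true ); // globals already present, so it starts immediately
console.log( QUnit.started ); // true
```

This avoids importing all of phet-io into each test file; only the shared start helper knows what needs to be loaded.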
I still feel like there is a bigger issue here though, and that is why doesn't the CT harness appropriately log the error. @jonathanolson can we make that possible?
Would this be adding just a general on-error handler to the main body of the qunit pages? I'm not as familiar with that setup, or what is needed here.
Perhaps something like this to forward errors:
```diff
Index: chipper/js/sim-tests/qunit-connector.js
===================================================================
diff --git a/chipper/js/sim-tests/qunit-connector.js b/chipper/js/sim-tests/qunit-connector.js
--- a/chipper/js/sim-tests/qunit-connector.js (revision 91b434fb75cbc3942f6e286adaa59787913392ad)
+++ b/chipper/js/sim-tests/qunit-connector.js (date 1616706971485)
@@ -35,4 +35,20 @@
       total: data.testCounts.total
     } ), '*' );
   } );
+
+  window.addEventListener( 'error', a => {
+    let message = '';
+    let stack = '';
+    if ( a && a.message ) {
+      message = a.message;
+    }
+    if ( a && a.error && a.error.stack ) {
+      stack = a.error.stack;
+    }
+    ( window.parent !== window ) && window.parent.postMessage( JSON.stringify( {
+      type: 'error',
+      message: message,
+      stack: stack
+    } ), '*' );
+  } );
 } )();
\ No newline at end of file
```
```diff
Index: aqua/js/client/qunit-test.js
===================================================================
diff --git a/aqua/js/client/qunit-test.js b/aqua/js/client/qunit-test.js
--- a/aqua/js/client/qunit-test.js (revision 4e9d35d455f8a8b5e6bb4b7331abed310e3acacf)
+++ b/aqua/js/client/qunit-test.js (date 1616706971495)
@@ -28,12 +28,18 @@
   let failed = 0;
   let receivedDone = false;
   let message = '';
+  let error = '';
  const done = function() {
    if ( id !== null ) {
      message = `${iframe.src}\n${passed} out of ${passed + failed} tests passed. ${failed} failed.\n${message}`;
      if ( !receivedDone ) {
-        message += `\nDid not complete in ${options.duration}ms, may not have completed all tests`;
+        if ( error ) {
+          message += `\n${error}`;
+        }
+        else {
+          message += `\nDid not complete in ${options.duration}ms, may not have completed all tests`;
+        }
        aqua.testFail( message );
      }
      else if ( passed > 0 && failed === 0 ) {
@@ -80,5 +86,10 @@
        done();
      }
    }
+    else if ( data.type === 'error' ) {
+      clearTimeout( id );
+      error = data.message + data.stack;
+      done();
+    }
  } );
} )();
\ No newline at end of file
```
That looks like a good approach to me.
Sounds good. I will keep my eye on CT, and add an error to studio to make sure that it fails on CT.
This did not work; reverting. I'm a bit stumped on this problem.
Presumably we'd want to run the test locally to see how it works, and what it would report or error out with?
Right! @jonathanolson, I'm going to add you to this, since you have a quarterly goal that generally is about CT bugs. I'll also try to work on it at some point too. I agree the next step seems to be to run some testing harnesses locally to determine the order of operations on error reporting.
Also, @jonathanolson, if you want to move this to aqua, feel free. We determined above that this is general (by reproducing it in studio/)
This is no longer reproducible, and was fixed by https://github.com/phetsims/aqua/issues/179. Also in the future we will have much more info about tests because of https://github.com/phetsims/aqua/issues/178. Closing
Some tests in CT are not being done.