Closed by shadowgate15 3 years ago
I have a suspicion the flakiness comes partly from the compilation step. There's value in testing against the compiled files, as that's what gets distributed. That said, I think we might benefit from running coverage against the unprocessed source, as that's the real source of truth and less prone to flakiness.
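If coverage were pointed at the unprocessed source, a typical setup might look like the sketch below (assuming `nyc` is the coverage tool; the `src/` and `lib/` paths are hypothetical, not taken from this repo):

```json
{
  "all": true,
  "include": ["src/**/*.js"],
  "exclude": ["lib/**", "test/**"]
}
```

With `all: true`, files in `src/` that no test touches still show up with 0% coverage, which makes the numbers more stable than coverage computed only over whatever compiled files happen to be loaded.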
Have you had any successful experiments around this issue @shadowgate15 ?
I'm adding a few tests to increase test coverage and hopefully make the build pass at least on Node v10. It still puzzles me what the main source of the coverage difference between v10 and v12 is.
The master is 🟢 again! Have added coverage and reworked how `logger = _.cloneDeep(console)` was used. The problem with cloning `console` is that it produces way too much junk output during the test run and was hiding the information about which test caused an unhandled exception. Having explicit logger variable declarations also gives the tests more readability imo.
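An explicit logger stub along those lines could look like the sketch below (the `createSilentLogger` helper and its shape are hypothetical, not the repo's actual implementation): it silences output while still recording calls, so a test can assert on them if needed.

```javascript
// Explicit no-op logger: keeps test output quiet and records every call,
// so each test can state exactly what it expects instead of cloning console.
function createSilentLogger() {
  const calls = [];
  const record = (level) => (...args) => calls.push({ level, args });
  return {
    calls,
    info: record('info'),
    warn: record('warn'),
    error: record('error')
  };
}

const logger = createSilentLogger();
logger.error('boom');
console.log(logger.calls.length); // 1
console.log(logger.calls[0].level); // 'error'
```

Unlike `_.cloneDeep(console)`, nothing here writes to stdout, so an unhandled exception from a test is no longer buried in job output.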
Yeah I was planning on creating a new solution for the logger outputs whenever I did the refactor on the tests.
@shadowgate15 is there a plan for this refactor written up somewhere? What is the goal the refactor needs to achieve?
I’m in the middle of it, currently. The goal would be to reduce the flakiness, have it match your most recent refactor of the source code, and overall make the tests more readable for future changes.
Sounds awesome. Having one giant test.js file was getting hard to reason about. It would also be good to clarify the cases when "serial" test cases should be used. My guess was they were made serial because of the flakiness?
The serial tests were used because of the use of @sinonjs/clock to simulate the passing of time in those tests. I'm hoping to either remove the need for them or come up with an alternative solution.
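The reason fake timers force serial execution is that they work by replacing process-wide globals like `setTimeout`, so two tests with clocks installed concurrently would stomp on each other. A minimal hand-rolled illustration of that shared-global mechanism (not Bree's tests or any real fake-timer library):

```javascript
// Fake-timer libraries replace process-wide globals such as setTimeout.
// Because that state is shared, concurrent tests each installing a clock
// would interfere with each other -- hence test.serial.
const realSetTimeout = global.setTimeout;

function installFakeClock() {
  const queue = [];
  global.setTimeout = (fn, ms) => queue.push({ fn, ms });
  return {
    // Drain all queued callbacks synchronously.
    runAll() { while (queue.length > 0) queue.shift().fn(); },
    // Restore the real global so other tests are unaffected.
    uninstall() { global.setTimeout = realSetTimeout; }
  };
}

const clock = installFakeClock();
let fired = false;
setTimeout(() => { fired = true; }, 100);
console.log(fired); // false: the callback is queued, not scheduled
clock.runAll();
console.log(fired); // true: drained synchronously
clock.uninstall();
```

If a second test ran in parallel between `install` and `uninstall`, its timers would land in the first test's queue, which is exactly the cross-test interference serial execution avoids.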
So the flaky tests were coming from `job terminates after closeWorkerAfterMs`. Below is my attempt at a fix, but it is still failing. Any thoughts @naz @niftylettuce?
```js
test.serial('job terminates after closeWorkerAfterMs', async (t) => {
  // Three assertions in total: one inline, two inside the exit handler
  t.plan(3);

  // Silent logger so the worker's output doesn't pollute the test run
  const logger = {
    info: () => {},
    error: () => {}
  };

  const clock = FakeTimers.install({ now: Date.now() });
  const bree = new Bree({
    root,
    logger,
    jobs: [{ name: 'long', closeWorkerAfterMs: 100 }]
  });

  bree.run('long');
  const start = clock.now;
  t.true(typeof bree.closeWorkerAfterMs.long === 'object');

  await delay(1);

  bree.workers.long.on('exit', (code) => {
    t.is(start + 100, clock.now);
    t.is(code, 1);
  });

  clock.runAll();
  clock.uninstall();
});
```
All good now.
Currently the tests are so flaky that coverage changes from test run to test run.