Closed by elliot-nelson 1 year ago
I'm definitely open to improving them if you have any suggestions! However, in my experience, mocha/chai does usually print out the diff/comparisons if I use matchers like `to.eql()` instead of `.include()`, but that means you need to know the exact items in the list. For this project, it's fairly reasonable to build a full list of expected diagnostics instead of checking for the presence of just one.
I created the `expectDiagnostics` test helper to improve reporting for exactly this reason. Not all unit tests have migrated to this approach though, so it's possible those are the ones you're encountering. Here's an example of how to use it.
For all diagnostic testing, I would suggest you structure your tests more like this:

```ts
expectDiagnostics(program, [
    DiagnosticMessages.keywordMustBeDeclaredAtNamespaceLevel('namespace')
]);
```
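For context on why a helper like this reports better: comparing the full diagnostic list with deep equality means a failure can print every actual item next to every expected item, rather than "expected array to include X". Here is a rough sketch of the idea only; the `Diagnostic` shape and the helper body below are assumptions for illustration, not the project's actual implementation:

```typescript
// Hypothetical sketch of an expectDiagnostics-style helper.
// The Diagnostic interface and error format are assumed, not the real ones.

interface Diagnostic {
    code: number;
    message: string;
}

function expectDiagnostics(actual: Diagnostic[], expected: Diagnostic[]): void {
    // Normalize each diagnostic to a stable string and sort, so ordering
    // differences don't cause spurious failures.
    const normalize = (list: Diagnostic[]) =>
        list.map(d => `${d.code}: ${d.message}`).sort();
    const a = normalize(actual);
    const e = normalize(expected);
    if (JSON.stringify(a) !== JSON.stringify(e)) {
        // Report BOTH complete lists, which is the whole point of the pattern.
        throw new Error(
            `Diagnostics mismatch.\nExpected:\n  ${e.join('\n  ')}\nActual:\n  ${a.join('\n  ')}`
        );
    }
}
```

The design choice is the same one described above: asserting on the whole list trades a little test-writing effort for a complete diff on failure.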
That's fair, I'll try switching to that!
I guess part of me considers helpers like `expectDiagnostics` more of a patch than a solution, because it doesn't address all the other random things you'll eventually want to test -- a list of nodes, a list of class names, a list of function params, etc.
But you are dead on, that helper would solve my current problem. 😆
It looks like chai is truncating the list if it gets too long.
This prints the way you want (because the actual items are small).
This is the issue you're talking about (chai truncates because the items are larger).
So, to fix it, I was able to configure chai to not truncate.
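A minimal sketch of that configuration: chai exposes `config.truncateThreshold`, and setting it to `0` disables truncation entirely.

```typescript
import * as chai from 'chai';

// Disable chai's output truncation so failed deep-equality assertions
// print the full actual/expected arrays instead of a "...(truncated)" tail.
// A threshold of 0 means "never truncate".
chai.config.truncateThreshold = 0;
```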
Would you like to open a PR to adjust that setting? I think you can set it at the top of `src/testHelpers.spec.ts`, which gets imported just about everywhere.
What would you think about the pattern described in this stackoverflow answer?

Basically I'd make this change everywhere:

```ts
// before
import { expect } from 'chai';

// after
import { expect } from '../../chai-local';
```

And we could put any config options in this file.
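A sketch of what that `chai-local` module could contain; the filename and exact exports here are assumptions for illustration, not taken from an actual PR:

```typescript
// chai-local.ts -- a single place to configure chai for the whole test suite.
import * as chai from 'chai';

// Never truncate actual/expected output in assertion failure messages.
chai.config.truncateThreshold = 0;

// Re-export what the tests need, so call sites only change their import path.
export const expect = chai.expect;
export const assert = chai.assert;
```

With this in place, any future chai tweaks (plugins, other `chai.config` options) live in one file instead of being scattered across test files.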
(I suppose we could also wire chai through the test helpers file itself, or combine the test helpers into this file, but `testHelpers` seems mostly like utilities that would survive even if you replaced mocha with jest or tap, so it feels like a separate concern.)
Yeah, I think that's a great option! Go with that.
ISSUE
As a developer looking to contribute to the codebase, when I run unit tests that have a failure, they often produce unhelpful / incomplete feedback.
For example:
SUGGESTION
This is just a developer experience thing, but I'm positive Mocha can produce better results than this... where's the diff output, the list of what was in the array versus what was expected in the array, etc.? (This is especially jarring coming from Jest, where that stuff is "just handled".)
I'm not sure whether the fix for this is in chai, mocha, an option we can turn on/off in package.json, or something else -- just leaving it as an issue we could fix in future.