Yes, we should print the test description somewhere there (above the operator?)
Thanks for creating zora-reporters and supporting zora! I am using Zora in all my personal projects! Recently I tried the diff reporter but ran into the same problem.
test('section', t => {
    t.ok(2 + 2 === 5, 'check that 2+2=5');
    t.ok(2 * 2 === 4, 'check that 2*2=4');
    // ...
});
When one of the ok() tests in a section fails, the diff reporter shows only the section title, without the assertion description. So I found this temporary solution:
test('section', t => {
    t.equal(2 + 2, 5, 'check that 2+2=5'); // <-- equal() instead of ok()
    t.equal(2 * 2, 4, 'check that 2*2=4');
    // ...
});
With this, everything is better:
FAIL constructors at: /code/projecttest/test.js:16:5
[equal] expected number to be 2 but got 1
TOTAL: 548
PASS: 547
FAIL: 1
SKIP: 0
But sometimes the value is not serialisable:
t.equal(1n, 2n, 'BigInt equal');
FAIL constructors at: /code/project/test/test.js:16:5
[equal] unsupported type bigint
TOTAL: 548
PASS: 547
FAIL: 1
SKIP: 0
Maybe I will find out how to fix this.
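If the diff logic serializes values with JSON.stringify at some point (just a guess on my part), a replacer function could make BigInt values printable - a rough sketch:

const serialize = (value) =>
    // rough sketch - JSON.stringify throws "Do not know how to serialize a BigInt"
    // by default, so convert bigint values to a tagged string first
    JSON.stringify(value, (key, val) =>
        typeof val === 'bigint' ? `${val}n` : val
    );

console.log(serialize({ actual: 1n, expected: 2n })); // {"actual":"1n","expected":"2n"}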
I wanted to jump in and help with this, but I am once again unable to install the project.
$ npm run install:ci
> install:ci
> for f in $PWD/{.,assert,reporters,zora,pta}; do cd $f && npm ci; done;
sh: 1: cd: can't cd to /home/mindplay/workspace/zora/{.,assert,reporters,zora,pta}
Something wrong with this for-loop? (It looks like the {...} braces aren't being expanded - npm runs scripts with sh, and brace expansion is a bash feature, so plain sh passes the braces through literally.)
This time I do have the right dependencies installed:
$ node -v
v15.14.0
$ npm -v
7.7.6
(any reason not just to use --workspaces for these loops?)
Oh wait, I do not have the right dependencies - nvm switched me back to npm version 7 🙄
Hm, no, it's the same deal with npm versions 7 or 8. 🤔
arf, probably something different between the OS you use and mine.
(any reason not just to use --workspaces for these loops?)
you need to install at root level as well, so you could replace that line with:
npm ci && npm ci --workspaces --if-present
and it should do the trick
Or you can do it manually, entering each workspace.
Oddly, the clean command does work - it uses the same syntax. (I'm on Ubuntu 20.04.3 LTS on WSL, if that helps.)
How do you test this?
I'm running npm run dev from the reporters folder and it just passes no matter what I do - if I put random letters in printDiagnostic or wrong numbers in printSummary, it just prints whatever I do to the screen and says it passed.
You must have some way of running a sample test, so you can review the colorized output after making changes?
The print* functions are side-effect functions, which are not easy to test (and where tests are not really relevant).
You can split your implementation between a pure function which returns the formatted message (and which you should test) and the print function which just calls that function (no need to test).
You can see an example with the summary.
printDiagnostic has too wide a scope to be tested correctly, so we test at a lower unit of code (see the diagnostic message and the equal operator).
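In other words, something along these lines (a hypothetical sketch, not the actual zora-reporters code):

// pure: takes data, returns text - easy to unit test (names are hypothetical)
const formatSummary = ({ total, pass, fail, skip }) =>
    `TOTAL: ${total}\nPASS: ${pass}\nFAIL: ${fail}\nSKIP: ${skip}`;

// side effect: a thin wrapper around the pure function - nothing left worth testing
const printSummary = (counters, out = console) =>
    out.log(formatSummary(counters));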
I'd be pretty interested in having the description in the diff reporter too, what's the current state of the PR, do you need help @mindplay-dk?
@tsnobip I haven't gotten back to this - I don't have the time and attention this requires right now, sorry.
@lorenzofox3 this area of the codebase doesn't seem immediately very contributor friendly? I would suggest using IOC for the print functions, but that might require a breaking change (?) and at least some refactoring work?
I would love to help but working through this crazy console output is more overhead than I can manage atm...
@lorenzofox3 this area of the codebase doesn't seem immediately very contributor friendly? I would suggest using IOC for the print functions, but that might require a breaking change (?) and at least some refactoring work?
You mean getting injected some sort of console abstraction, like in the tap reporters? 🤔
That is not really the problem here and the added value would be very low I think.
I think the main issue in this area is the explosion of cases and the branching. That's why I believe splitting the code base between pure functions which return small parts of the text stream, and the actual side effect (print), is a good strategy. Abstracting the "printer" would not solve the problem.
Obviously there are gaps in the coverage, and this would require integration testing to be complete, but that is a lot of work to catch very few bugs/regressions.
Eventually we could consider a plugin-based architecture where the reporter is in charge of the branching, and the plugins hook themselves in to print/generate the message for a very specific case.
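Roughly this shape (purely speculative - none of these names exist in the code base):

// the reporter owns the branching; each plugin handles one very specific case
const createReporter = (plugins) => (message) => {
    const plugin = plugins.find((p) => p.match(message));
    if (plugin) {
        console.log(plugin.format(message));
    }
};

// e.g. a hypothetical plugin dedicated to bigint equality diagnostics
const bigintDiagnostic = {
    match: (m) => m.type === 'diagnostic' && typeof m.actual === 'bigint',
    format: (m) => `[equal] expected ${m.expected}n but got ${m.actual}n`,
};

const report = createReporter([bigintDiagnostic]);
report({ type: 'diagnostic', actual: 1n, expected: 2n }); // [equal] expected 2n but got 1n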
I would love to help but working through this crazy console output is more overhead than I can manage atm...
I'll do it then, or @tsnobip if you wish to?
to be honest I don't know if I'll have time for that right now, especially given we're questioning the use of zora for our use case at work (I think it's really great and would give anything to get rid of the Jest machinery, but we need snapshot testing unfortunately 😞)
Abstracting the "printer" would not solve the problem.
But there is more to abstracting this than just avoiding the messy console output, I think? It enables you to actually test for desired side effects: does the reporter actually print the stuff you expect it to print?
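Something like this (hypothetical - the factory and the message shape are invented for illustration):

import assert from 'node:assert';

// a reporter factory that gets its output injected, instead of writing to the console directly
const createDiffReporter = (out) => (message) => {
    if (!message.pass) out.log(`FAIL ${message.description}`);
};

// in a test: inject a fake output that records what gets printed
const lines = [];
const report = createDiffReporter({ log: (line) => lines.push(line) });
report({ pass: false, description: 'check that 2+2=5' });

assert(lines[0].includes('FAIL check that 2+2=5'));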
I carefully describe most of my assertions like this:
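(a made-up example - register() is not real code, but this is the style:)

test('user registration', (t) => {
    // hypothetical domain code, just to show descriptive assertion messages
    const user = register('alice@example.com');
    t.ok(user.id, 'a new user gets an id assigned');
    t.equal(user.status, 'pending', 'a new user starts out as "pending"');
});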
With the diff reporter, these descriptions aren't currently displayed anywhere.
With tap-diff, my test output was like a written specification - I've kind of lost that.
I'm sure this must be an oversight and not by design? The assertion messages are part of zora's assertion model after all.
Perhaps a verbosity option might be appropriate as well? Again, with tap-diff, I was able to produce a full "written spec" from my test-suite, whereas the diff reporter works more like the "pessimist" type tap reporters. There are times when you want to see everything your test covers, when you want written confirmation that all your tests are being run, and so on.
(don't get me wrong, I'm not longing for tap-diff - it had plenty of problems, and the diff reporter is generally much more useful.)
And let me know if I can help with any of that? 🙂