🚀 Feature: Report file name where a test fails #3200

twitchard commented 6 years ago

Is it possible to have Mocha report the filename containing a failed test? Could I write a reporter that does this, or would I have to build support for it first and then write such a reporter? I'm not at all familiar with the Mocha codebase yet, but I'd be willing to dive in and try contributing this if the Mocha community would welcome it.

After a test fails, I frequently find myself grepping for the title of the test to determine the filename, so I can then run the test in isolation. Reporting the filename would save me a step.

Bamieh commented 6 years ago

Which reporter are you using? AFAIK all Mocha reporters call the epilogue when the run is done, which is where the error stack is displayed.

[screenshot of reporter output showing an error stack]

For example, the error above is happening in index.js, line 3, column 11.

twitchard commented 6 years ago

The error stack is not guaranteed to contain the filename of the failing test. If the test times out, or if it fails due to an asynchronously thrown exception, I often have to go hunting. In my application this happens about fifty percent of the time.

twitchard commented 6 years ago

An example:

$ cat test.js
describe('a mocha test that times out', function () {
    it('does not display the filename containing the failing test', function (callback) {
        // `callback` is never called, so this test times out after 2000ms
    })
})
$ $(npm bin)/mocha test.js

  a mocha test that times out
    1) does not display the filename containing the failing test

  0 passing (2s)
  1 failing

  1) a mocha test that times out
       does not display the filename containing the failing test:
     Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves.

Bamieh commented 6 years ago

Fair enough, this could be added in the epilogue in the base reporter. The epilogue is called by all Mocha reporters to show the number of failures/passes and where each failure happened.
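
For context on where that filename would come from: every Mocha Test object carries a "file" property with the path of the spec file that defined it, and every reporter receives the Runner, which emits a "fail" event with the failing test. A minimal sketch of a reporter built on only that (an illustration, not Mocha's actual epilogue code):

// file-aware-reporter.js
// Collect failed tests and print each one's file at the end of the run.
function FileAwareReporter(runner) {
  var failures = [];
  runner.on('fail', function (test, err) {
    failures.push(test);
  });
  runner.on('end', function () {
    failures.forEach(function (test) {
      console.log(test.fullTitle() + ' (' + test.file + ')');
    });
  });
}
module.exports = FileAwareReporter;

Run it with: mocha --reporter ./file-aware-reporter.js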

dfberry commented 6 years ago

I would like to fix this bug but have an issue with the suggestion from @Bamieh: the epilogue only has access to the flat stats object.

{
  "suites": 1,
  "tests": 1,
  "passes": 0,
  "pending": 0,
  "failures": 1,
  "start": "2018-01-15T20:29:17.733Z",
  "end": "2018-01-15T20:30:24.430Z",
  "duration": 66697
}

runnable.js has two points where this error might be output (line 239 and line 299). I could just add the filename to the end of the string at those points.

if (!err && self.duration > ms && self._enableTimeouts) {
  err = new Error('Timeout of ' + ms +
    'ms exceeded. For async tests and hooks, ensure "done()" is called; ' +
    'if returning a Promise, ensure it resolves. ' + self.file);
}

The result looks like:

 Error: Timeout of 200ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. C:\Users\dinab\repos\mocha\mocha-pr-xxx\test\reporters\dina.spec.js

How would adding it to the epilogue stats be helpful? I'm trying to understand, as I don't know the internals of the library that well.

boneskull commented 6 years ago

We need to know exactly where this information is supposed to be printed by a reporter, and which reporters print it, or don't.

Let's put the brakes on rushing to change things before we agree what we expect Mocha to do.

boneskull commented 6 years ago

@twitchard

For timeouts:

Can this problem not be solved by making your suite and test names more unique? The spec reporter will print the timeout message directly after the suite and test name. Note that we can't generate a stack trace, because the test doesn't actually fail on its own (among other reasons; see #3119).

For async exceptions:

When an async exception is thrown, you should get a stack trace. Do you have an example of where this is not happening?

twitchard commented 6 years ago

@boneskull The stack trace works as you describe. However, the stack trace does not necessarily contain the filename containing the test.

Here is a minimal example.

//lib.js
function f (shouldThrow, callback) {
    if (shouldThrow) {
        // Throw asynchronously; the stack trace will point here (lib.js), not at the test.
        return setTimeout(() => { throw new Error() })
    }
    return setTimeout(()=>callback())
}
exports.f = f

//passing_test.js
const f = require('./lib').f
describe('f', () => {
    describe('with true', () => {
        it('eventually passes', callback => {
            return f(true, callback)
        })
    })
})

//throwing_test.js
const f = require('./lib').f
describe('f', () => {
    describe('with false', () => {
        it('eventually passes', callback => {
            return f(false, callback)
        })
    })
})

In my ideal scenario, something in the output would mention the file containing the failing test. (Actually, what would be really cool is mentioning the line and column number of the `it` block that defines the test; then, in principle, when a test fails an editor plugin could take you right to that test.)

Here is the actual output:

  f
    with true
      1) eventually passes

  f
    with false
      ✓ eventually passes

  1 passing (14ms)
  1 failing

  1) f
       with true
         eventually passes:
     Uncaught
  Error
      at Error (native)
      at Timeout.setTimeout (lib.js:3:38)

As you can see, the stack trace originates in the library (which is often useful, just not as immediately useful as the location of the failing test).
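
For what it's worth, the "line and column of the it block" wish can be approximated in user land by recording a throwaway stack trace at the moment each test is registered. A purely hypothetical sketch (the wrapper and the definedAt property are inventions for illustration; Mocha does not do this):

// At the top of a spec file, where Mocha's "it" global already exists.
const realIt = it;
it = function (title, fn) {
  const test = realIt(title, fn);
  // stack[0] is "Error", stack[1] is this wrapper, stack[2] is the it() call site.
  const stack = new Error().stack.split('\n');
  test.definedAt = (stack[2] || '').trim();
  return test;
};

A reporter could then print test.definedAt next to each failure. The catch is cost: materializing a stack trace for every registered test is exactly the per-test Error creation that boneskull warns about below.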

boneskull commented 6 years ago

the example is a little confusing due to the naming of the tests

boneskull commented 6 years ago

anyway, I don't really see how it's possible to show the originating test without:

Of the above tools, I don't know to what extent they work in any given environment. It'd either need to work in both modern browsers and supported Node.js versions, or we'd have to abstract it. Any way you slice it, it won't be easy.

We'd be able to retain the filename and the test and suite name(s) for errors like this, but I don't see how we can get an actual stack trace, since we can't just create Error instances with impunity (the tests that don't fail would cause memory leaks, and the performance hit to fix that would be unacceptable).

Mind you: this is a capability I want Mocha to have... more discovery & requirements are necessary though.
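
Worth noting: the cheap half already works today, even for twitchard's uncaught async example, because Mocha attributes an uncaught exception to the test that was running and fails it (that is why the output above says "Uncaught"). So at failure time a reporter can report the file and full title path without creating any Error. A small sketch, inside a reporter constructor as before:

runner.on('fail', function (test, err) {
  // No stack capture needed: the failing test already knows where it lives.
  console.error('Failure in ' + test.file + ': ' + test.titlePath().join(' > '));
});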

dfberry commented 6 years ago

@boneskull I would like to add the test case, but I'm not sure how to do that. I see that there isn't one in the tests so far, but what would it look like? I'm still learning...

vkarpov15 commented 6 years ago

So this issue is about long stack traces rather than just displaying the filename the failing test is in? I think both would be useful and the latter would be much easier.

twitchard commented 6 years ago

Personally, I would have far less use for a stack trace than for the filename/line number.

boneskull commented 6 years ago

@vkarpov15 No, the stack trace is not going to be feasible. It would be one way of getting at the filename & line number. The line number is also not going to be feasible.

What is possible is:

  - retaining the filename and the test and suite name(s) for the failing test (as noted above)

This information would need to be retained via one of the several modules mentioned in this comment.

@dfberry This would involve many changes to very touchy code, FWIW. What has to happen is we need to basically "track" when a test (or code under test) adds an "async" task to the runtime's internal message queue. Examples of such tasks: timers (setTimeout, setInterval), I/O callbacks, and promise resolution jobs.

What makes this potentially more problematic (depending on the solution we choose) is:

A couple ideas then:

  1. Gather the info we need, and use one of the modules to "store" that data. Let the user code do whatever it wants. There will be some API hook to register a handler if an exception is thrown; we can then use the stored data to provide more information on the failure.
  2. "Track" user code to see if it still has tasks in the event loop after the test has completed. So, instead of just blindly marching on to the next test, Mocha exits with an error. This would detect problems that currently arise due to use of --exit, in perhaps a more helpful manner.
boneskull commented 6 years ago

That being said, the groundwork for such a thing could open things up for Mocha to be extra helpful in tracking down problems... it's something I would want to investigate further.

boneskull commented 6 years ago

relevant: #3223

electrovir commented 2 years ago

I wrote a quick-and-dirty reporter plugin that seems to accomplish this.

mocha-spec-reporter-with-file-names

output looks like this:

[screenshot of the plugin's output: spec-style results annotated with file names]
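
For anyone curious, the core of such a plugin can be quite small: extend the built-in Spec reporter and print the failing test's file. A minimal sketch in the same spirit (not electrovir's actual implementation):

// spec-with-file-names.js
'use strict';
const { reporters, Runner } = require('mocha');
const { EVENT_TEST_FAIL } = Runner.constants;

class SpecWithFileNames extends reporters.Spec {
  constructor(runner, options) {
    super(runner, options); // keep the normal spec output
    runner.on(EVENT_TEST_FAIL, function (test) {
      console.log('     file: ' + test.file);
    });
  }
}
module.exports = SpecWithFileNames;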

JoshuaKGoldberg commented 8 months ago

I've been bitten by this before. It's quite annoying to run a large test suite containing dynamically generated test names with misleading call stacks, then have to manually figure out where the tests are coming from.

https://www.npmjs.com/package/mocha-spec-reporter-with-file-names looks like a pretty great solution. Nice job @electrovir 😄.

Per #5027, we want to be very careful about landing big new features, even if they're in output designed to be human-readable rather than consumed by machines. So I think this would be semver-major.

cc @mochajs/maintenance-crew: any objections?

JoshuaKGoldberg commented 7 months ago

Talking with @voxpelli: this is an issue, interestingly, with quite a few default reporters.

$ npx mocha packages/hello-world/test.spec.js --reporter nyan
 0   -__,------,
 1   -__|  /\_/\ 
 0   -_~|_( x .x) 
     -_ ""  "" 

  0 passing (2s)
  1 failing

  1) a mocha test that times out
       does not display the filename containing the failing test:
     Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/Users/josh/repos/mocha-examples/packages/hello-world/test.spec.js)
      at listOnTimeout (node:internal/timers:573:17)
      at process.processTimers (node:internal/timers:514:7)

This brings up another question: should the file name always be there? Or just when the test fails? We're thinking always, for consistency.

Note that built-in reporters generally use the Base reporter's epilogue method, so fixing this is likely to involve modifying Base. But there might be reporters that add their own logic; investigation required.
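
One cheap way to prototype the Base change before touching Mocha itself: the epilogue prints failure details through Base.list(failures), and each entry in failures is a failed test object, so a wrapper can surface the file names. A sketch of that experiment (monkey-patching internals, so for investigation only, and it assumes Base.list keeps its current shape):

// Prototype: print each failing test's file above the usual failure listing.
const Base = require('mocha').reporters.Base;
const originalList = Base.list;

Base.list = function (failures) {
  failures.forEach(function (test, i) {
    console.log('  %d) %s', i + 1, test.file);
  });
  return originalList.call(this, failures);
};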