brackets-archive / bracketsIssues

Archive of issues in brackets.

[CLOSED] Add explanation string to jasmine's expect() failure message #2610

core-ai-bot opened 3 years ago

core-ai-bot commented 3 years ago

Issue by gruehle, Thursday Jan 31, 2013 at 17:25 GMT. Originally opened as https://github.com/adobe/brackets/issues/2752


The Brackets team does not own or maintain Jasmine

Please direct your feedback to this discussion thread, where the maintainers of Jasmine might actually see it.


This came up during our architecture meeting.

The output of a failing unit test is cryptic. It shows the name of the test and a simple error like "expected false to be true". In the worst case, you can get a bunch of failures whose names are exactly the same and whose error messages are things like "expected 3 to be 4", "expected true to be false", etc. You then have to scour the stack trace, find the line that caused the failure, and go to the source to see what was actually being tested.

It would be nice if we could append a contextually relevant message to the error string.
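For example, a spec like this (the names are invented for illustration, not taken from the Brackets test suite) produces two failures that are indistinguishable from the message alone:

describe('document state', function () {
  // Invented stand-in object, for illustration only.
  var doc = { isDirty: true, isSaved: false };

  it('reflects a saved document', function () {
    expect(doc.isDirty).toBe(false); // reported only as "Expected true to be false."
    expect(doc.isSaved).toBe(true);  // reported only as "Expected false to be true."
    // Neither message says which flag is wrong or why it matters.
  });
});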

core-ai-bot commented 3 years ago

Comment by njx Thursday Feb 07, 2013 at 23:27 GMT


Reviewed, low priority.

core-ai-bot commented 3 years ago

Comment by dangoor Monday Feb 11, 2013 at 17:52 GMT


There was a thread on the Jasmine list about this previously. Here was a proposed syntax:


expect(userlist.getUsers().length).toEqual(1).because('when a user object is removed, the list should equal 1');

Which seems pretty reasonable. In that two-year-old discussion, Corey Haines argues that the tests should probably be refactored so that this feature is not required. As noted toward the bottom of the thread, I have seen cases where it's most convenient to call a function from a loop, and there the ability to set the failure string helps (as we discussed during our meeting).

I'm not sure how easy it would be to implement .because, but I'll see when I get a chance to dig in a bit more.

core-ai-bot commented 3 years ago

Comment by dangoor Tuesday Feb 12, 2013 at 17:46 GMT


For future reference, here's a pull request for this feature, though as noted in the comments there, I think the .because version is nicer.

core-ai-bot commented 3 years ago

Comment by philipbulley Friday Oct 11, 2013 at 11:20 GMT


:+1:

core-ai-bot commented 3 years ago

Comment by mcalthrop Monday Jan 20, 2014 at 12:34 GMT


+1. Reading "expected 'true' to be 'false'" is not helpful for debugging failing tests.

core-ai-bot commented 3 years ago

Comment by fhur Monday Mar 31, 2014 at 23:39 GMT


:+1: I agree with @mcalthrop. Consider the following:

expect(booleanA).toBe(expectedA)
expect(booleanB).toBe(expectedB)
expect(booleanC).toBe(expectedC)

The failure message will be

Expected true to be false.
Error: Expected true to be false.

core-ai-bot commented 3 years ago

Comment by mcalthrop Tuesday Apr 01, 2014 at 08:35 GMT


It's worth noting that when a test fails, the error output includes the line number of the failing test.

This is how I've got around the problem so far, and it is an adequate solution.

core-ai-bot commented 3 years ago

Comment by fhur Tuesday Apr 01, 2014 at 13:38 GMT


@mcalthrop That's true, although in my particular case I write my tests in CoffeeScript, so the reported line number refers to the compiled JavaScript rather than the source.

core-ai-bot commented 3 years ago

Comment by bhelzer Friday May 16, 2014 at 00:16 GMT


+1. .because would be a nice addition. I have the same problem with CoffeeScript line numbers.

core-ai-bot commented 3 years ago

Comment by jonathanmv Tuesday May 27, 2014 at 16:26 GMT


I vote up. I have a JavaScript class that builds some objects. I want one method per object to be built, and my test loops over the builder to check that each method is defined. Then, of course, I get 17 messages saying "expected undefined to be defined", all of them pointing to the same line number. I'd like a way to output a custom message when an expectation fails so that I have a better hint about why the code fails.
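Roughly, the situation looks like this (a simplified sketch, not my real code):

describe('object builder', function () {
  // Invented builder with one missing method, for illustration only.
  var builder = {
    buildHeader: function () {},
    buildFooter: function () {}
  };
  var expectedMethods = ['buildHeader', 'buildFooter', 'buildSidebar']; // imagine 17 of these

  it('defines a build method for every object', function () {
    expectedMethods.forEach(function (name) {
      // Every failure reports the same spec name, the same line number, and
      // "Expected undefined to be defined." -- no hint of which name was missing.
      expect(builder[name]).toBeDefined();
    });
  });
});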

core-ai-bot commented 3 years ago

Comment by ordiep Friday Jun 20, 2014 at 15:32 GMT


:+1: It's nice to be able to explain what is going on sometimes, and I also use CoffeeScript.

core-ai-bot commented 3 years ago

Comment by fgutmann Thursday Jul 03, 2014 at 08:24 GMT


+1. This would also be very helpful for us. We are using TypeScript, so line numbers are likewise of limited use.

core-ai-bot commented 3 years ago

Comment by kissrobber Wednesday Jul 16, 2014 at 15:32 GMT


:+1:

core-ai-bot commented 3 years ago

Comment by rabbitjar Thursday Jul 24, 2014 at 10:20 GMT


It makes sense for e2e tests where multiple checks are performed within a single spec.

core-ai-bot commented 3 years ago

Comment by skeller88 Sunday Jul 27, 2014 at 00:39 GMT


+1!

core-ai-bot commented 3 years ago

Comment by PaulL1 Sunday Jul 27, 2014 at 01:30 GMT


I can add to this. The reason for me is that I've written helper classes to simplify some of my basic tests.

So, for example, I have an array of all the inputs on a page, and a truth table that says whether each should be enabled or not given the mode we're in. The helper class iterates through the array and checks that each input is enabled as it should be. When it fails, it gives me the line number and tells me that "expected 'true' to be 'false'", but I don't know which input is enabled when it shouldn't be.

With the because syntax, I could put the input name into the because statement, giving me context and making the debugging easier.

As an alternate way of doing this, and drawing on http://joelhooks.com/blog/2012/11/17/using-custom-jasmine-matchers-to-make-unit-tests-more-readable/ and http://stackoverflow.com/questions/11942085/is-there-a-way-to-add-a-jasmine-matcher-to-the-whole-environment, I can create a custom matcher.

Somewhere central I put a beforeEach:

beforeEach(function() {
  // Jasmine 1.x-style matcher: compares with == and appends the supplied
  // reason to the failure message.
  var matchers = {
    toEqualBecause: function( value, message ) {
      this.message = function() {
        return "Expected '" + this.actual + "' to equal '" + value + "' because " + message;
      };

      return this.actual == value;
    }
  };

  this.addMatchers(matchers);
});

And then my individual expect statements become:

expect( myElement.isDisplayed() ).toEqualBecause( true, myMessage );

This gives me an output of:

Expected 'false' to equal 'true' because input_status

It's a fine workaround for this need; the only real disadvantage is that I'm missing whatever magic might have been in the standard Jasmine matcher. It's fine for me when I'm testing truthy-type values (enabled, etc.), but probably less good if I were testing objects or other rich values.

core-ai-bot commented 3 years ago

Comment by aleksihakli Tuesday Jul 29, 2014 at 14:03 GMT


+1 for a .because() method; it would clean up nesting considerably.

core-ai-bot commented 3 years ago

Comment by peterflynn Tuesday Jul 29, 2014 at 19:49 GMT


@PaulL1 @aleksih @jonathanmv etc.

NOTE

This is not a place where your feedback will be heard by the maintainers of Jasmine. Brackets is a completely unrelated open-source project -- we just happen to be users of Jasmine too. This issue is just an internal note recording that we have this as a pain point as well. Since the Brackets team doesn't maintain Jasmine or actively contribute to it, we're unlikely to be the ones who will fix this.

Please post your feedback, "+1"s, and other notes on one of these discussion threads in the Jasmine forum:

If you post there, the Jasmine team might hear your feedback and do something about it; if you post here, they won't ever see your feedback.

core-ai-bot commented 3 years ago

Comment by aleksihakli Wednesday Jul 30, 2014 at 07:53 GMT


Seeing that the last comment in the Google Groups discussions was about 3 years old, I opened a new issue on Jasmine's GitHub issue tracker; I think that might be the more active platform nowadays.

core-ai-bot commented 3 years ago

Comment by kissrobber Wednesday Jul 30, 2014 at 09:39 GMT


I sent a pull request: https://github.com/gruntjs/grunt-contrib-jasmine/pull/153. This might help some of you here.

core-ai-bot commented 3 years ago

Comment by rick-kilgore Tuesday Oct 21, 2014 at 21:26 GMT


+1

core-ai-bot commented 3 years ago

Comment by xinkaiwang Tuesday Oct 21, 2014 at 21:28 GMT


+1

core-ai-bot commented 3 years ago

Comment by avrelian Tuesday Oct 28, 2014 at 08:15 GMT


This problem can be solved by jasmine-custom-message.

describe('the story', function() {
  it('should finish ok', function() {
    since('all cats are grey in the dark').
    expect('tiger').toEqual('kitty'); // => 'all cats are grey in the dark'
  });
});

core-ai-bot commented 3 years ago

Comment by davidemannone Saturday Jan 17, 2015 at 19:53 GMT


So I did it on my own for Jasmine 2.0 and the related DefinitelyTyped definitions, and I've posted it here: https://github.com/davidemannone/jasmine2.0-explained. Here is how to use it:

expect(SOMETHING).toEqual(WHAT-EXPECTED).byFailReport("YOUR-CUSTOM-REPORT!");

Only if the match fails, it reports: "Expected SOMETHING to be WHAT-EXPECTED. >YOUR-CUSTOM-REPORT!<". For more details, read the Readme file.

core-ai-bot commented 3 years ago

Comment by peterflynn Sunday Jan 18, 2015 at 11:14 GMT


Closing -- official word from the Jasmine team (https://github.com/jasmine/jasmine/issues/641#issuecomment-54736801) is that this is unofficially supported already -- just pass an extra arg to the matcher (the function called after expect()) -- but the preferred "Jasmine way" of accomplishing this is to write your own custom matcher that has enough context to provide a nicer message.
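For reference, a minimal sketch of that custom-matcher approach under Jasmine 2.x (the matcher name, message wording, and the object in the usage line are illustrative, not an official API):

beforeEach(function () {
  jasmine.addMatchers({
    // Illustrative matcher: like toEqual, but appends a caller-supplied reason.
    toEqualBecause: function (util, customEqualityTesters) {
      return {
        compare: function (actual, expected, reason) {
          var result = {
            pass: util.equals(actual, expected, customEqualityTesters)
          };
          if (!result.pass) {
            result.message = 'Expected ' + jasmine.pp(actual) + ' to equal ' +
              jasmine.pp(expected) + ' because ' + reason;
          }
          return result;
        }
      };
    }
  });
});

// Usage (input is a placeholder object from the test under discussion):
// expect(input.isEnabled).toEqualBecause(true, 'input_status should be enabled in edit mode');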

core-ai-bot commented 3 years ago

Comment by sudharsan94 Tuesday Mar 22, 2016 at 08:52 GMT


Could someone please give me a sample piece of code for configuring jasmine-custom-message with Protractor?

core-ai-bot commented 3 years ago

Comment by avrelian Tuesday Mar 22, 2016 at 11:30 GMT


@sudharsan94, it may be of help.

core-ai-bot commented 3 years ago

Comment by VenkatRamReddyK Monday Jul 11, 2016 at 18:49 GMT


I am trying to write unit tests using Karma and to use the since block provided by the jasmine-custom-message framework to log the failure message, but it doesn't seem to work, so I was wondering if I missed something. I did the following:

karma.conf.js configuration is as follows:

module.exports = function(config) {
  'use strict';
  config.set({
    autoWatch: true,
    basePath: '../',
    frameworks: ['jasmine'],
    files: [
      'app/app.js', // main app
      'app/**/*.module.js',
      'app/**/*.constants.js',
      'app/**/*.provider.js',
      'app/**/*.factory.js',
      'app/**/*.filter.js',
      'app/**/*.directive.js',
      'app/**/*.spec.js' // unit tests
    ],
    browsers: [
      'PhantomJS'
    ],
    // Which plugins to enable
    plugins: [
      'karma-phantomjs-launcher',
      'karma-jasmine'
    ],
    // Continuous Integration mode
    // if true, it captures browsers, runs the tests and exits
    singleRun: false,
    colors: true,
    logLevel: config.LOG_DEBUG
  });
};
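One thing I may be missing (just a guess on my part): the jasmine-custom-message script itself isn't listed anywhere in files, so since would be undefined inside the specs. Perhaps something like this needs to go at the top of files (the exact path is a guess and depends on how the package is installed):

files: [
  'node_modules/jasmine-custom-message/jasmine-custom-message.js', // assumed path
  'app/app.js', // main app
  // ... rest of the existing entries ...
],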
core-ai-bot commented 3 years ago

Comment by askucher Tuesday Aug 02, 2016 at 14:30 GMT


+1

core-ai-bot commented 3 years ago

Comment by rupeshtiwari Friday Feb 03, 2017 at 17:46 GMT


I also think we should show a detailed message. Currently it just says "expected false to be true" plus a bunch of unhelpful file references like buildExpectationResult@file:///..., expectationResultFactory@file:..., addExpectationResult@file:///C:/Users/, addExpectationResult@file://... in ..file (line 52), which is not that helpful.