Hey man, I managed to get it working with a few simple changes. Have a look at the develop branch and see if that does what you need it to. I admit I don't have much experience with code coverage tools, so I don't know if it's doing what it's supposed to, but give my changes a try and let me know if further changes are needed.
Cheers!
I've personally never used a code coverage tool. Honestly, I've barely used unit testing, but I think it's important to figure it out. Those *.spec.js files were calling my name last night, but I wasn't really sure where to start. I'm hoping that something like karma-coverage can help me learn.
I can't test it until I get home this evening, but two things are on my mind after looking at your commit:

1. 'build-dev' to 'build' in the 'build-test' Gulp task. This means that the 'vendorJS' task also gets run, though, so that's going to slow it down. Maybe you did this because you couldn't guarantee, when running gulp test, that the vendor.js file was available in ./dist/? Maybe there should be a 'build-test' task which uses 'build' as well as a 'build-test-dev' task which uses 'build-dev'?
2. !(*spec)*.js to the 'coverage' preprocessor. I was trying to use browserify-istanbul or istanbulify transforms to instrument the files in Istanbul because Google led me to believe that Browserify needed to stitch together all of the files into one bundle.js file before Istanbul could work. Apparently that's not the case.

I changed 'build-dev' to 'build' to quickly and easily wipe the dist directory (which contains the coverage reports). It was quick and dirty so that you could take a look. I definitely think that a 'build-test' task should be implemented, though.
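Something like this is what I'm picturing for the split (a rough sketch; the dependency wiring is hypothetical):

var gulp = require('gulp');

// 'build-test' depends on the full 'build' so that vendor.js is
// guaranteed to exist in ./dist/ before Karma runs
gulp.task('build-test', ['build'], function() {
    // start Karma here
});

// 'build-test-dev' skips the 'vendorJS' overhead by depending on 'build-dev'
gulp.task('build-test-dev', ['build-dev'], function() {
    // start Karma here
});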
This all being said, I'm not sure how to get the coverage to work as I'd expect it to. It shows 100% coverage for everything, and when I tried adding a new function, it didn't seem to detect that the function had no coverage.
I definitely want to get this working, though. Like you, I agree that this is important for testing. Most of my testing experience is with integration and functional testing as opposed to unit testing. Unit tests, however, are something I definitely want to learn. I'll check with my colleagues at work on Monday and see if they have any insight into this.
Well, I've made a little bit of progress by including Partialify. It got me past the unexpected < token error. I'm getting an HTML report which includes the test coverage data for everything in /libs/ with this code, but I'm still not getting any meaningful report for anything in /app/. It seems to me that it isn't actually pulling in my code and looking at it, but I'm not really sure what to do about that. It just has one line for each file, claiming 100% coverage of that line. Each one looks like this:
Statements: 100% (1 / 1) Branches: 100% (0 / 0) Functions: 100% (0 / 0) Lines: 100% (1 / 1) Ignored: none
require("c:\\Users\\Brandon\\Documents\\GitHub\\Optical\\app\\common\\constants\\CONSTANTS.js");
Here's my karma.conf.js file as it is now:
var istanbul = require('browserify-istanbul');
// Snip
preprocessors: {
// './dist/bundle.js': ['browserify'],
'./app/**/*.js': ['browserify', 'coverage']
},
browserify: {
debug: true,
transform: [
'partialify',
'debowerify',
istanbul({
'ignore': ['**/node_modules/**', '**/*.spec.js']
})
],
extensions: ['.js']
},
reporters: ['spec', 'coverage'],
So, if I add '**/libs/**' as something that Istanbul ignores, that solves the problem of it writing a coverage report for the libraries. Of course, we don't care about the coverage report of the libraries, but I figured I'd mention that karma-coverage works at least for that.
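That is, something like this in the transform options (a sketch that just extends the ignore list already shown above):

istanbul({
    'ignore': ['**/node_modules/**', '**/*.spec.js', '**/libs/**']
})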
However, when it generates the report for '**/libs/**', it also hits an error: TypeError: Cannot read property 'split' of undefined. You may have noticed that this error was also faced by someone in the thread that I linked. It may also be worth pointing out that one of the karma-coverage contributors claims that the problems described in that thread should have been fixed in a release from 11 days ago...
Removing 'coverage' from the preprocessors resolved the cannot read property 'split' error. I'm making progress on this very... slowly... I'm now able to get complete, functioning reports for the libraries. Also, when I removed the preprocessor, it completely lost track of anything in /app/, which in my opinion confirms that browserify-istanbul isn't picking that up.

Updating to karma-chrome-launcher ^0.1.12 also got rid of the cannot find module 'which' warning.
Edit: I am really close now. I have a functioning coverage report for code in /app. One problem is that karma-browserify is doing a normal Browserify bundle without using the external bundles. I'm not sure how to get around that, short of manually including each library in the files array of karma.conf.js, which isn't really good. I'm thinking maybe I should use gulp-istanbul and gulp-karma to incorporate the coverage report and the karma server into the existing gulp tasks.

I'm also getting an error now (that's all the output says, really: ERROR), and I'm not getting any spec reporting. Here's the relevant config:
files: [
'./libs/angular/angular.js',
'./libs/angular-mocks/angular-mocks.js',
'./app/app.js'
],
preprocessors: {
'./app/app.js': ['browserify']
},
browserify: {
debug: true,
// This prebundle doesn't even seem to be working. It says it's including Angular twice...
prebundle: function(bundle) {
bundle.external(['angular','angular-mocks']);
},
transform: [
'partialify',
'debowerify',
istanbul({
'ignore': ['./**/*.spec.js', '**/libs/**']
})
],
extensions: ['.js']
},
reporters: ['spec', 'coverage'],
Not sure what to tell you just yet. My changes got the coverage report generating, but I'd get 100% in all fields, and if I added a function somewhere, it didn't seem to detect it. I didn't get a chance to come back to this over the weekend. You've gone into this much more heavily than I have.
I'm also not sure why partialify would have helped with this, as the current starter kit doesn't use partials in the traditional manner.
Regarding the 'build-dev' task: I tried yesterday, but I failed.

As for Partialify: I was getting the unexpected token < error, so I tried it, and it resolved that error. I don't understand why we wouldn't need Partialify, because we are in fact requiring HTML files. I'll try to look at my implementation more tonight and see what I can come up with.
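For reference, this is the kind of require that breaks without a transform (hypothetical file name):

// Browserify chokes on the '<' in the HTML unless a transform like
// Partialify inlines the file as a string
var template = require('./widget.html');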
"Only generated while testing." Are you sure? So, you think that I should run gulp test
to run my tests and get a coverage report, and that should be it? The gulp stream would end. No autowatch. No rebundle. If that's the case, then what I've written can do that. Here's what it does:
gulp karma
'browserify'
preprocessor accepts app.js
as the entry point and bundles the files.'istanbul()'
transform instruments the files in the bundle, but it doesn't instrument *.spec.js
files or files in /libs/
. 'coverage'
reporter. It is a complete and proper HTML Istanbul report.Even if a separate test/coverage task is the desired outcome, I still think that the browserify/istanbul aspect of it should be brought into the gulpfile instead of being written in karma.conf.js
. Specifically, I want to use the existing vendor.js
bundle to speed up the karma-browserify part. I think I can get that working with gulp-istanbul and gulp-karma.
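Here's roughly what I mean for the gulp-karma half (untested; the spec glob is a placeholder):

var gulp = require('gulp');
var karma = require('gulp-karma');

gulp.task('karma', function() {
    // gulp-karma mostly just points at the existing karma.conf.js;
    // the piped files define what Karma loads
    return gulp.src('./app/**/*.spec.js')
        .pipe(karma({
            configFile: 'karma.conf.js',
            action: 'run' // single run; 'watch' would keep the server alive
        }));
});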
However, I was looking at the autowatch feature of Karma, thinking that the workflow might be something like this:

gulp

On second thought, that would definitely cause a drop in rebuild performance even if I did use an existing vendor.js bundle. I'm also not sure there's real value in re-running the tests after every save. Maybe not. What do you think about that?
Edit: I reread the email, and I guess the html2js transform might be used instead of partialify.
So the reason the coverage report should be tied to the "test" task is that the coverage report is only relevant to Karma (which is only run during the "gulp test" task). The dev task doesn't run any tests, so coverage has to be tied to the "test" task.

Typically, coverage reports are generated during Continuous Integration builds that run all of the tests and generate the reports after the fact (CI could be Travis, Jenkins, TeamCity, etc.).

I've currently got things set up so that when the "test" task is running, the tests automatically re-run when code is changed (saved). That tends to be the way to do things: as you're writing your code, you can quickly and easily see whether your tests pass.
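In karma.conf.js terms, the difference between the two modes is roughly this:

// development: keep the server alive and re-run the tests on save
autoWatch: true,
singleRun: false,

// CI: run once, write the report, and exit
// autoWatch: false,
// singleRun: true,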
It's looking like I may not get around to accomplishing much tonight. I've been trying to get browserify-istanbul installed, but that package seems to take an eternity to install. It's ridiculous.
Got it working. However, I can't seem to figure out how to exclude the libs directory from the coverage report. Maybe you can take a crack at it. I've pushed my changes to the develop branch.
Excellent work fixing the last part of this. Well done!
It took me over an hour to write this. I got caught up reading about various things. ^.^"
I created pull request #23. If you decide that you want to use karma and istanbul, then this matter is closed for you. However, it's the wrong way to do it in my opinion.
Here's the reason:
1. gulp test is executed.
2. 'build-dev' is executed. Browserify bundles app files into bundle.js. An external vendor.js bundle speeds up the process.
3. 'karma' is executed. In karma.conf.js, karma-browserify bundles the app files again. It does not use an external bundle. Files are instrumented by Istanbul. Tests are executed by Karma. A coverage report is written by Istanbul.
4. 'build-dev' rebundles, saves the file as bundle.js, and reloads the connect server.

To me, this is wrong. As a result, I've been reading about somehow tying Karma into the rebundle process from 'build-dev', but I don't think it's going to work. Therefore, I'm going to look at using tape, angular-node, and coverify. I'll admit that I'm not really sure how it'll work.
Ideally, I want a 'build-test' task which performs all of the same steps as 'build-dev' but also runs the tests and generates a coverage report without the extra overhead of a second rebundling. I want it to be fast.
Substack suggests "tape" because Substack made "tape" ;).
Honestly, I completely agree with you. As it stands, this approach is clunky and should be revised. Have you tried https://www.npmjs.com/package/gulp-karma at all? I'll give it a shot first chance I get.
That being said, I'm open to exploring tape, angular-node, and coverify, even though angular-node and coverify are still fairly new. I'm not completely sold on tape, though. It's mainly a personal preference, but I prefer the mocha/jasmine approach to testing, largely because those frameworks have more in common with test frameworks from other languages (like PHPUnit for PHP or TestNG for Java).
Using a touchscreen...
I know that substack made tape. He also made browserify, so his suggestion seems valid to me.
Gulp-karma doesn't let me pass a bundle into it or really control the steps of the process. It lets me pipe to karma, and the configuration is just a path to the karma.conf.js file.
I'm thinking that the coverage report should be generated not on every bundling but perhaps only every time a test file changes. Tests should be rerun every time an app js file changes, but maybe we could somehow only rerun the ones that reference files that changed.
I'll dig more into alternatives. I suggested tape/coverify because it seems like I can integrate them with gulp, whereas karma seems totally separated. Maybe I can still use jasmine/mocha-sinon-chai if I just find something to replace karma.
Edit (kind of a note to self): Files could be instrumented by gulp-istanbul and tested by gulp-mocha without bundling. Example gulpfile. Example test. angular-node would be a devDependency in package.json, and then that would be provided to the test when it does require('angular'). Something like the sketch below.
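A rough sketch of that gulpfile (untested; the globs are assumptions):

var gulp = require('gulp');
var istanbul = require('gulp-istanbul');
var mocha = require('gulp-mocha');

gulp.task('test', function(cb) {
    gulp.src(['./app/**/*.js', '!./app/**/*.spec.js'])
        .pipe(istanbul())             // instrument the app files
        .pipe(istanbul.hookRequire()) // make require() load the instrumented copies
        .on('finish', function() {
            gulp.src('./app/**/*.spec.js')
                .pipe(mocha())                 // no Karma, no bundling; specs run in node
                .pipe(istanbul.writeReports()) // write the coverage report
                .on('end', cb);
        });
});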
I was just giving you a hard time about tape. I honestly don't disagree with anything you have said. I'm also completely open to the prospect of using tape. My concern with it is that it's not as human readable as mocha/chai or jasmine, making it harder for a collaborator to come in and quickly grasp what's happening in the tests. Tests may require a fair bit of comments throughout. It does, however, seem like it would be considerably quicker than the current setup.
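To make the readability comparison concrete, here's the same trivial assertion in both styles (made-up test subject):

// tape
var test = require('tape');

test('add() sums two numbers', function(t) {
    t.plan(1);
    t.equal(add(1, 2), 3, 'adds correctly');
});

// jasmine
describe('add()', function() {
    it('sums two numbers', function() {
        expect(add(1, 2)).toBe(3);
    });
});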
I'm also tempted, these days, to veer away from gulp or any other build tool in favour of straight scripts. There tends to be a lot of bloat with gulp modules, which slows things down.
I understand what you mean about gulp bloating things, and I've read similar sentiments before. Every tool I can think of has a command-line interface, so writing a gulpfile to glue everything together can be seen as a waste when you could just use some command-line arguments and pipe each output to the next input. I just spent some time reading about two approaches: npm run and make. They seem about the same. I'll spend some more time on it tomorrow.
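For example, the npm run approach would look something like this in package.json (the script contents are made up for illustration):

{
    "scripts": {
        "build": "browserify ./app/app.js -o ./dist/bundle.js",
        "test": "karma start karma.conf.js --single-run"
    }
}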
My only hesitation at this point with dropping gulp is that this project has accumulated a decent following, and I don't necessarily want to lose that. I'd definitely be interested in seeing how much of a speed difference something like "run" or "make" would make, but I'd like to continue with the current project, if only to keep the momentum going. For now, I think it's worth looking into alternative testing options, though.
So after reading up more on tap / tape, I'm much more sold now. I'm going to investigate going that route instead of using karma.
Well. I haven't been doing anything more than reading lately, but I think Mocha could work without Karma as well.
Yeah, I found this as well: https://blog.engineyard.com/2015/client-side-javascript-project-gulp-and-browserify
There is an overwhelming number of options (as is the case with everything javascript). It's hard to choose which path to take. I'm also looking at https://theintern.github.io
Life has been busy lately...
I started reading more into angular-node, and it's pushing me away now. angular-node needs jsdom, and jsdom doesn't like Windows. It reminds me of when I first tried NodeJS a few years back, when every simple process involved a complicated list of "twelve easy steps."
Heh, my second kid was born last week, so I know all about life being busy ;).
jsdom is definitely cool in terms of what you can do with it, but the use case seems more functional than unit. The upside is that you can do functional unit tests, which is definitely neat.
Also, why are you developing on Windows? Why not use a Unix environment (an Ubuntu VM, for example)? That's the more common dev environment these days unless you're working with Microsoft stacks like .NET.
Hello, Ben.
I was tinkering with karma-coverage to see if I could get it working, and I haven't succeeded. I'm getting an unexpected token < error from framework.browserify when I add browserify-istanbul (or istanbulify) as a transform. Google leads me to believe that this error is universally understood to mean "something really obscure is broken, and we don't know what it is. Good luck figuring it out."

It needs something like this in karma.conf.js:
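// a sketch of the gist; the full options are in the configs that appear
// earlier in this thread
browserify: {
    transform: ['browserify-istanbul']
},
preprocessors: {
    './app/**/*.js': ['browserify']
},
reporters: ['coverage'],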
The docs of karma-coverage suggest using 'coverage' as a preprocessor, but we can't do that because of Browserify. Apparently browserify-istanbul or istanbulify can be used to get around that, but, as I said, I haven't succeeded.

What do you think about karma-coverage? Have you already set up such a tool with this architecture before? Do you maybe prefer a different tool? I'd like to hear your opinion.