Closed: sbma44 closed this 6 years ago
I'll throw out wishlist items that I'd like to implement, but had decided not to in order not to rock the boat:

- cleaning up console.log/errors in the bin script, esp commented-out ones
- moving off mocha to tape

If either of these is appealing to you, please say the word; otherwise I will not bother.
> I am generally opposed to writing tests that make network requests;

Agreed.
> cleaning up console.log/errors in the bin script, esp commented-out ones

My preference is to drop in a little `lib/console` that the app and libraries all load, and which is just a thin wrapper around `console`. When all the logging entries are logged via that library, it's dead simple to hush all the logs in testing, and also dead simple to pass in another logger library that suits local policies and needs.
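As a rough illustration of that idea, a thin wrapper might look something like the sketch below. The names (`use`, `silence`, `makeLogger`) are illustrative only, not the project's actual API:

```javascript
'use strict';

// Hypothetical sketch of a lib/console.js: a thin wrapper around the
// global console that tests can hush, and that apps can point at
// another logger entirely.
function makeLogger () {
  let backend = console;
  let silenced = false;
  return {
    log:     function () { if (!silenced) backend.log.apply(backend, arguments); },
    error:   function () { if (!silenced) backend.error.apply(backend, arguments); },
    use:     function (logger) { backend = logger; },   // swap in a custom logger
    silence: function (on) { silenced = on !== false; } // hush all output
  };
}

module.exports = makeLogger();
```

A test setup file could then call `silence(true)` once, and every module that loads the wrapper goes quiet without touching any call sites.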
> moving off mocha to tape

Thanks, but no thanks. I think we've found another area where we'll have to agree to disagree.
More to follow...
Also, please nuke `package-lock.json` and add it to `.gitignore`. I know the arguments for and against it, and haven't found the benefits to exceed the costs, especially for packages with minimal/no dependencies. ;)
> stub out the `http` object further to handle the specific requests made in the first test, moving these tests fully offline

This is the preferable of the two options. However, other options are available: moving the functions from `bin/maxmind-geolite-mirror` into a library, and then testing that library, seems like a reasonable choice. Then just omit `test/bin.js` entirely. After all, it'll have only a tiny bit of code like this:
```js
const config = require('../lib/config');
const mirror = require('../lib/mm-geolite-mirror');

mirror.doOne(config.geoIpDbs.shift(), function iterator () {
  if (!config.geoIpDbs.length) return;
  mirror.doOne(config.geoIpDbs.shift(), iterator);
});
```
Someday it would be better to use async/await in a forEach loop instead of my "poor man's async", but for now I know someone still wants node 4 support.
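For what it's worth, the eventual async/await version could stay just as sequential as the callback chain above. This is only a sketch, and `doOneAsync` is a hypothetical promisified `mirror.doOne`; note it uses a `for...of` loop rather than `Array#forEach`, since awaiting inside a `forEach` callback would not actually serialize the downloads:

```javascript
'use strict';

// Hedged sketch: a sequential async/await rewrite of the download loop.
// doOneAsync(db) is hypothetical (a promisified mirror.doOne).
async function mirrorAll (dbs, doOneAsync) {
  for (const db of dbs) {
    await doOneAsync(db); // one download at a time, in order
  }
}
```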
OK, `tape` aside, I think we're in sync. I will:

- nuke `package-lock.json` and add it to `.gitignore` (`npm version` automatically added it as part of the commit it makes)
- add `lib/console.js`
I'm happy to punt on the "poor man's async" questions for now but will note that this code is actually a pretty good candidate for parallelism -- you can usually eke out some speed by running multiple http downloads at once, even when it's all going to the same host.
> this code is actually a pretty good candidate for parallelism

Not if you want the MaxMind folks to be happy with the script. I recall reading somewhere (either on their site or in my email correspondence with them) that they frown on that, and have CDN policies in place to smack down abusers with IP restrictions. Were that not the case, that code would almost certainly be a `Promise.all()` already.
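For illustration, the parallel variant being ruled out here would be roughly the following sketch, where `doOneAsync` is again a hypothetical promisified `mirror.doOne`:

```javascript
'use strict';

// Hedged sketch of the parallel variant: kick off every download at
// once and settle when all of them finish. Not appropriate here given
// the CDN rate-limiting concerns described above.
function mirrorAllParallel (dbs, doOneAsync) {
  return Promise.all(dbs.map(function (db) { return doOneAsync(db); }));
}
```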
Merging #20 into master will decrease coverage by 19.56%. The diff coverage is 79.06%.
```diff
@@            Coverage Diff             @@
##           master      #20      +/-   ##
==========================================
- Coverage     100%   80.43%   -19.57%
==========================================
  Files           1        3       +2
  Lines           6       92      +86
  Branches        1       14      +13
==========================================
+ Hits            6       74      +68
- Misses          0       18      +18
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| `lib/config.js` | `100% <ø> (ø)` | :arrow_up: |
| `lib/console.js` | `75% <75%> (ø)` | |
| `lib/mm-geolite-mirror.js` | `79.48% <79.48%> (ø)` | |
Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update b337dcc...f9350eb. Read the comment docs.
Thanks @msimerson, I appreciate the quick replies and thoughtful feedback. Are you comfortable with the current code coverage level? It superficially looks like a regression, but I don't think it reflects the actual change in coverage/test thoroughness. If you're OK with it, I think you'll need to be the one to change the codecov config to allow me to merge this.

If not, I can write some tests to trigger these edge cases, but most of them look to simply be failure modes that spit out debug info.
> Are you comfortable with the current code coverage level?

You caught me looking at the "misses" and seeing what's needed to get it back to 100%. Although 100% is a tidy goal, I don't let perfection get in the way of better, and this PR certainly makes it better.
Great. I'm unable to merge -- if you could do so and drop a comment here when you've had a chance to publish this new version to npm, I'd be grateful.
```
$ npm publish
+ maxmind-geolite-mirror@1.1.0
```
refs #18 and #19
- `use strict` to allow node 4 compatibility
- `local` value in `config.js`, bringing it more in line with the project's earlier code

Changing the `bin` script's interface felt like more than a patch, but I'm not sure it counts as a breaking change per se. The change to tarball behavior is substantially different, but since 1.0.9 had a bug that rendered tarball handling unworkable, I don't think this should count as a breaking change either.

Here's the most controversial part: I changed the existing behavior of `test/bin.js`. I'd like to talk about whether additional revisions are appropriate. Specifically: the previous test simply `require()`d the bin script. Failure could occur in the event of a syntax error. The act of requiring kicked off all the default network requests associated with invoking the script, but mocha's default test timeout (2000ms) killed most of these off before they completed, and their results weren't checked anyway.

In order to unit test the tarball functionality I had to change the script's exported interface (otherwise my new tests would kick off the full-dataset download process in the course of hooking into specific methods). In for a penny, in for a pound: I put a basic async wrapper around the script's primary codepath, tested for completion of the download loop, and adjusted mocha's timeout.
I am generally opposed to writing tests that make network requests; but if we do make them, we should probably check their results. The project status quo makes those requests, but aborts them and doesn't check the output. This strikes me as the worst of both worlds.
So! I'd like to suggest two options:

- stub out the `http` object further to handle the specific requests made in the first test, moving these tests fully offline

I think the second option is preferable, but am eager to hear what you think @msimerson.