Do we know why it was like that in the past? I honestly have no idea.
Sure, this was just a copy-paste of what I use myself, but it does make sense to force people to be stricter. I prefer to keep it loose myself, as I can easily fix any potential issues and don't want to have to bump the version in every little module just because mocha made a tiny breaking change. YMMV.
Alright, thanks. I'm gonna stick with the looser way too. Given the modules we build (generally small ones) it's unlikely we'd run into such a breaking change anyway.
Probably smart to do this on larger projects, but on tiny modules with one test function it feels like more work than it's worth.
> it's unlikely we'd run into such a breaking change anyway.
I have hit these issues quite often on multi-year node projects. It typically results in a negative response from the team and/or clients when you have to explain that you spent X hours fixing test failures due to upstream changes in node.
Just my experience :)
> Given the modules we build (generally small ones) it's unlikely we'd run into such a breaking change anyway.
In a way it is a higher burden for you: imagine someone opens a PR in 1 or 2 years on one of the tiny module repos and the tests fail. Now they fail because of mocha changing, not the new code. So you have to go in, update your tests, and then ask them to rebase. In the end it results in more overhead and work. :package:
Just some thoughts :)
You have some good points, but with 600+ modules that has yet to happen to me. I'll think about it.
> I have hit these issues quite often on multi-year node projects. It typically results in a negative response from the team and/or clients when you have to explain that you spent X hours fixing test failures due to upstream changes in node.
Many (most) of our modules are done after the init commit. And oftentimes the tests are as easy as an `it()` call with a few asserts. The chance that mocha introduces a breaking change for something as easy as that is super small.
For bigger modules I agree. But for bigger modules I wouldn't use generator-nm anyway. And probably I wouldn't use mocha either.
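For illustration, a complete test file at that scale might look like the following sketch (the `unicorn` module and its expected output are hypothetical):

```js
'use strict';
// Mocha provides `it` as a global when it runs the test file.
var assert = require('assert');
var unicorn = require('./'); // hypothetical one-function module

it('should prefix the input', function () {
	assert.strictEqual(unicorn('foo'), 'unicorn-foo');
});
```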
> In a way it is a higher burden for you: imagine someone opens a PR in 1 or 2 years on one of the tiny module repos and the tests fail.
This has never happened thus far, I should say.
> But for bigger modules I wouldn't use generator-nm anyway.
Why not? Just curious.
> And probably I wouldn't use mocha either.
What would you use?
> But for bigger modules I wouldn't use generator-nm anyway.
> Why not? Just curious.
I use generator-node for bigger things. I want to be able to write things in ES2015, and when bigger modules scale up it's handy to have the Gulp build process. With "bigger modules" I meant things with more than one file and hundreds of LOC though; for other things I :heart: generator-nm as it's pretty much exactly my code style.
> And probably I wouldn't use mocha either.
> What would you use?
Probably tap. Though in fact I'm not a fan of any testing framework at the moment. Maybe in the future I'll use Ava for everything.
On a slightly related note: I've been searching for a good way to test browser-focused modules (for use with browserify) for a long time. I don't want the overhead of having to add a whole test framework like QUnit or Mocha, and I want to be able to run tests in CI and locally on multiple browsers.
> On a slightly related note: I've been searching for a good way to test browser-focused modules (for use with browserify) for a long time. I don't want the overhead of having to add a whole test framework like QUnit or Mocha, and I want to be able to run tests in CI and locally on multiple browsers.
Yeah, me too. I started working on it with Ava a year ago, but got distracted. Also, I do very few browser-focused things these days.
Having a `*` dependency in an actual lib is a long-term maintenance issue. Eventually there will be a new major version of mocha, which may cause your test suite to fail. Locking the version of mocha using a `^<current_version>` range prevents this issue.
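A minimal sketch of what that looks like in `package.json` (the version number here is hypothetical; a caret range like `^2.3.0` accepts future patch and minor releases of mocha, but not a breaking `3.0.0`):

```json
{
  "devDependencies": {
    "mocha": "^2.3.0"
  }
}
```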