xcambar opened this issue 10 years ago
There's trouble with mocking. I tested it too quickly. I'll come back with a more robust solution.
OK, I was just a setting away.
The following Gist provides an example of how we can use mocking in the tests: https://gist.github.com/xcambar/a119a8ed985e14521f51
I'll keep trying to have this work until further notice from you guys.
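Independently of the Gist (which may do this differently), the general shape of this kind of mocking is a PATH shim: a directory of fake executables placed ahead of the real ones. The sketch below is only illustrative; the mocks directory, the fake git, and the MOCK_LOG variable are made-up names for the example, not necessarily what the Gist uses.

```sh
#!/bin/sh
# Illustration: a fake `git` that shadows the real one for the script under test.
MOCK_DIR="$PWD/mocks"
mkdir -p "$MOCK_DIR"

cat > "$MOCK_DIR/git" <<'EOF'
#!/bin/sh
# Record the invocation instead of performing it.
echo "git $*" >> "$MOCK_LOG"
exit 0
EOF
chmod +x "$MOCK_DIR/git"

# Any script run with this PATH sees the fake binary first.
MOCK_LOG="$PWD/mock.log" PATH="$MOCK_DIR:$PATH" sh ./install.sh

# Afterwards, the log can be inspected or asserted on by the test harness
# (it only exists if the script actually called git).
[ -f "$PWD/mock.log" ] && cat "$PWD/mock.log"
```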
curl would need to be mocked with the actual content of nvm.sh, for example - this might be a bit trickier than you think.
That said, it should certainly be explored, and this issue is a great place to do it.
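As a rough sketch of what that could look like (not a settled approach): define a curl shell function in the test that emits a local copy of nvm.sh, and source the script under test so the function actually shadows the binary. The FIXTURE_NVM_SH name is invented for the example.

```sh
# In the test file, before exercising the code under test:
curl() {
  # Ignore the requested URL and flags; always emit the local fixture copy of nvm.sh.
  cat "${FIXTURE_NVM_SH:-./nvm.sh}"
}

# A shell function only shadows a binary for code running in the same shell,
# so the script under test has to be sourced rather than executed:
FIXTURE_NVM_SH="./nvm.sh"
. ./install.sh
```

The "trickier than you think" part is exactly this: if install.sh runs as a separate process, a function or alias defined in the test shell won't be seen, so either the script has to be sourced or the mock has to live on PATH as a real executable.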
You can follow the progress on this branch: https://github.com/xcambar/nvm/tree/install_tests
Here's a brief summary:
- install.sh is now in a function (except for the one-line bootstrap)
- nvm_has or nvm_error
The branch above is still a WIP, but I'll gladly take feedback as I'm working on it. I'll open a PR when coverage is complete.
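For context, the "everything in a function plus a one-line bootstrap" shape generally looks like the sketch below; the function name and the environment-variable guard are illustrative, not necessarily what the branch uses.

```sh
#!/bin/sh
# install.sh (sketch)

nvm_do_install() {
  # ...all of the previous top-level installation logic lives in here...
  echo "installing..."
}

# One-line bootstrap: run the installer when executed normally, but let a
# test source this file without side effects by setting NVM_ENV=testing.
[ "_${NVM_ENV-}" = "_testing" ] || nvm_do_install
```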
I'd be very concerned about changing the script and adding/changing tests in the same branch. Preferably, we'd get as many of the tests out as possible without any refactoring - or get a number of small PRs of changes (that could be confirmed not to break anything) out separately. I strongly prefer many small PRs to any large ones.
We can do that, but it'll be more psychological than anything: we don't currently have any non-regression validation, and we can't confirm that nothing breaks because there are no tests.
Adding integration tests for the install script (in its current state) to the existing test suite, crossing the various options, methods and shells seems to be the right way to address your concerns.
After all, if it's not unit tested, it can be battle tested :)
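A sketch of what that matrix could look like as a plain shell loop; the shell list and the METHOD variable are assumptions about how the installer is driven, not something taken from the repository.

```sh
#!/bin/sh
# Run the unmodified install script under each available shell and each
# install method, failing on the first non-zero exit status.
for shell in sh bash dash zsh ksh; do
  command -v "$shell" >/dev/null 2>&1 || continue  # skip shells not installed here
  for method in git script; do
    echo "== $shell / METHOD=$method =="
    METHOD="$method" "$shell" ./install.sh || {
      echo "install.sh failed under $shell with METHOD=$method" >&2
      exit 1
    }
  done
done
```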
I was thinking about @ljharb's comment about the install script being untested, and had a look at how we could test it.
The testing tools that seem of interest to me are Roundup, shunt, and the classic shunit2. Feedback on this list is more than welcome; I don't have much experience unit-testing shell scripts.
The strategy is currently as follows:

- setUp and tearDown would create/reset mocks as needed (if tests are not sandboxed)

This is certainly not perfect and there is probably more to do than described above, but I'm willing to give it a try and wanted to get your opinion on this first.
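To make the setUp/tearDown idea concrete, here is a minimal shunit2-style sketch of what such a test file could look like; the mocks directory, the fake curl, and the test body are placeholders rather than anything from the branch.

```sh
#!/bin/sh
# install_test.sh (sketch)

MOCK_DIR="$PWD/mocks"

setUp() {
  # Recreate the mocks before every test so no state leaks between tests.
  rm -rf "$MOCK_DIR"
  mkdir -p "$MOCK_DIR"
  printf '#!/bin/sh\ncat ./nvm.sh\n' > "$MOCK_DIR/curl"
  chmod +x "$MOCK_DIR/curl"
}

tearDown() {
  rm -rf "$MOCK_DIR"
}

test_install_runs_with_mocked_curl() {
  PATH="$MOCK_DIR:$PATH" sh ./install.sh
  assertEquals "install.sh should exit cleanly" 0 $?
}

# shunit2 is sourced last so it can discover and run the test* functions.
. ./shunit2
```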