jasperblues closed this issue 10 years ago
I think this is a good time to decide how much we value this metric and what we're going to do about it.
Although this might very well be symptomatic of some parts of the library lacking adequate test coverage, I think we should be careful not to value the number of lines covered over the behavior actually covered by our tests. In addition, we should always consider what risks we are mitigating by adding tests.
I looked at the code coverage report briefly and started looking into some files with lower code coverage. For example, the report for TyphoonPrimitiveTypeConverter.m indicates 68% coverage in an alarming red banner. Yet, from looking at the code, I'm not sure there is value in improving this file's coverage. TyphoonPrimitiveTypeConverter might benefit from someone taking a few minutes to review the code, but again, I don't see lots of value in adding tests to this particular file. The code itself is trivial (with the exception of TyphoonPrimitiveTypeUnknown and TyphoonPrimitiveTypeVoid).
Should we add tests to cover code like this anyway, or leave it uncovered for the moment?
For a project like Typhoon I'd like to see something close to 100% coverage.
Yes, this militant attitude to process might mean writing the odd test that is boring or of questionable value, but it gives us something back: the ability to use automation.
I don't think any of us wants to keep poring over the coverage report on a daily basis looking for untested code. Automated coverage checking gives us more assurance there.
With regards to exception test coverage: unfortunately, this is a well-known issue with lcov and Objective-C. Exception paths don't show up in the report, even when properly tested.
Typhoon's policy from day 1 has been > 90% test coverage and I'd like us to stick to that.
That's interesting about exceptions. I wasn't aware of that. Any known workarounds?
Good points. Something else to consider: using something like Coveralls (although I don't even know if that works with Objective-C or just Ruby), which would warn us when a commit increases or decreases code coverage and provide code coverage history per commit.
We could thus be confident that no code we see value in testing goes untested, while still having less than 100% coverage. As we progress toward 100% coverage, this would allow us to prioritize the tests with the most impact on the project.
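The per-commit delta check described above can be sketched in a few lines of shell. This is only an illustration of the idea, not Coveralls' actual mechanism; the function name and the notion of comparing two stored percentages are assumptions for the sake of the example:

```shell
# Hypothetical sketch of the per-commit check a tool like Coveralls performs:
# compare the current build's coverage percentage against the previous one
# and warn on any drop. The function name here is illustrative only.
coverage_delta() {
  prev="$1"; curr="$2"
  # awk handles the floating-point comparison portably
  if awk -v p="$prev" -v c="$curr" 'BEGIN { exit !(c + 0 < p + 0) }'; then
    echo "WARNING: coverage fell from ${prev}% to ${curr}%"
    return 1
  fi
  echo "Coverage held or improved: ${prev}% -> ${curr}%"
}
```

A CI job could call something like `coverage_delta 91.4 90.2`, print the warning, and use the non-zero return status to flag the commit.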
By the way: when you say 'for a project like Typhoon', do you mean that, in theory, all lines should be easily testable since it is a framework, not a client app (and thus there is no UI code)?
Coveralls:
I like this tool. I've implemented HTML reports so that the same reports can be used either on the build server or from the command line.
The old build produced a report only, but the new one will indicate if coverage has fallen below a configurable threshold: https://github.com/jasperblues/xcode-gradle
This will be pretty high, so as long as we're above that I'm happy.
Watching coverage deltas:
This is another alternative. For large projects with previously poor coverage that are trying to improve, this is a good approach. As we already have good coverage, all we need to do is maintain it.
Case-by-case judgement:
I don't want to make a case-by-case judgement on "Does this really need coverage?" because it will be more time-consuming than just testing it.
A project like Typhoon:
I mean that Typhoon is going to be used all over the world in all kinds of applications; our users want a very high level of assurance of quality.
For a UIKit-based project it's quite possible to achieve high test coverage as well (Typhoon can help here ;) ), but the return on investment may be judged differently.
Except for exceptions (the lcov bug), if you have code there and it's not being covered by a test, then what is it for?
If you're doing test-first BDD it should be covered by one of the specs or you're writing code that doesn't match your spec.
While I agree that there might be some pragmatic exceptions to this, the best way we can deliver more value is to eliminate waste. Probably the most valuable resource to preserve is human input; therefore, avoid case-by-case decisions on this and use automation.
Have done the following:
Coverage reports are hosted here until xcode-gradle can upload them to GitHub:
http://appsquick.dyndns.org:8085/artifact/TPN-TC/JOB1/build-latest/reports/coverage/index.html
xcode-gradle will soon also fail the build if coverage is below 90%
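A build-failing gate of that kind can be sketched in shell by parsing the line-coverage figure out of `lcov --summary` output. The `lines......:` format below is what lcov prints; the function name and the wiring are assumptions for illustration, not xcode-gradle's actual implementation:

```shell
# Hypothetical sketch of a coverage gate: extract the line-coverage
# percentage from `lcov --summary coverage.info` output and fail if it
# is below a threshold (90% matches the policy discussed above).
coverage_gate() {
  summary="$1"          # the text printed by: lcov --summary coverage.info
  threshold="${2:-90}"
  # Split on ':' and '%' so the "lines......: 91.2% (...)" row yields 91.2
  pct=$(printf '%s\n' "$summary" \
    | awk -F'[:%]' '/lines/ { gsub(/ /, "", $2); print $2; exit }')
  if awk -v p="$pct" -v t="$threshold" 'BEGIN { exit !(p + 0 < t + 0) }'; then
    echo "FAIL: line coverage ${pct}% is below ${threshold}%"
    return 1
  fi
  echo "PASS: line coverage ${pct}% meets the ${threshold}% threshold"
}
```

Hooked into a CI step, the non-zero return status is what would actually fail the build.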