JoranHonig opened 5 years ago
https://github.com/sc-forks/solidity-coverage might be a good one to use. More importantly, many major smart contract projects have already implemented the coverage generation script for running it automatically end-to-end.
@yxliang01 would it be possible to use this tool to get coverage information for each specific unit test? Or would we basically need to generate a coverage report for each different unit test?
@JoranHonig As far as I know, no. But, do we really need to get the coverage for each unit test?
I believe that the intention is to only run, for a given mutation, the tests that touch the line you're mutating. There may be other optimisations, but I'm not familiar enough with mutation testing in general to comment on whether there are cleverer improvements beyond that.
I agree this is not something that can currently be done with solidity-coverage, but pinging @cgewecke just so he knows that this is something people are talking about. The beta 0.7.0 branch he has been working on (which is radically reworked for the better) is certainly much closer to being able to do this than the 0.6.x branches have been, due to a closer integration with truffle.
> I believe that the intention is to only run tests during a mutation that touch the line you're mutating.
@area This is exactly the optimisation that I'm aiming for. If such a feature could be provided by solidity-coverage, that would be awesome!
I see. But, I feel one thing that could possibly already be done is to get the coverage data for one test file by isolating it from the others.
@yxliang01 True. We might also be able to leverage Mocha's grep functionality to execute singular tests.
> I believe that the intention is to only run tests during a mutation that touch the line you're mutating.
The new coverage client continuously updates an in-memory map of file:line hits. And mocha's third-party reporters might also be helpful - they let you run hooks for each stage in the test suite execution. So you could do something like running a hook on each `it` block and building a map of `file:line --> [test descriptions]`.
> I feel one thing could possibly already be done is to get the coverage data for one test file by isolating it from others.
This actually might be the simplest thing to do. The only drawback could be that in practice each file can have many tests. On the other hand, they're often related to the same code.
Incidentally, this topic once came up at solidity-coverage in relation to other research in issue 209. In that case, the engineer was trying to do risk assessment based on an academic paper which IIRC suggests you can narrow the field of audit inquiry by looking at how many tests touch a given line of code. It's called the Tarantula Automatic Fault-Localization Technique.
(So you might get a 2-for-1 if you build this plus This Week In Ethereum lists/highlights all security related projects with the word 'tarantula' in their title.)
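For reference, the Tarantula suspiciousness score mentioned above can be sketched as follows. This is my paraphrase of the formula from the fault-localization literature, not anything solidity-coverage implements: for a statement, suspiciousness = (failed/totalFailed) / (passed/totalPassed + failed/totalFailed), where passed/failed count the tests of each kind that execute the statement.

```javascript
// Sketch of Tarantula suspiciousness for one statement.
// passedS/failedS: counts of passing/failing tests that hit the statement.
function tarantula(passedS, failedS, totalPassed, totalFailed) {
  const passRatio = totalPassed === 0 ? 0 : passedS / totalPassed;
  const failRatio = totalFailed === 0 ? 0 : failedS / totalFailed;
  if (passRatio + failRatio === 0) return 0; // statement never executed
  return failRatio / (passRatio + failRatio); // 1 = most suspicious
}
```

A statement hit only by failing tests scores 1; one hit only by passing tests scores 0.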
Hi @cgewecke,
Thanks for the extensive response!
> This actually might be the simplest thing to do. The only drawback could be that in practice each file can have many tests. On the other hand, they're often related to the same code.
Assuming that I can find a cheap way to list all the tests in the project, I could execute a single one of those tests by adding something like `mocha: { grep: "test_name" }` to `module.exports` in truffle-config.js, right? In that case it would be possible to measure coverage at a higher granularity.
> Incidentally, this topic once came up at solidity-coverage in relation to other research in issue 209. In that case, the engineer was trying to do risk assessment based on an academic paper which IIRC suggests you can narrow the field of audit inquiry by looking at how many tests touch a given line of code. It's called the Tarantula Automatic Fault-Localization Technique.
That's very interesting! I'm not opposed to extending Vertigo with a command that allows you to do fault localisation.
P.S. Were you planning to extend solidity-coverage to report the coverage per unit test?
> I'm not opposed to extending Vertigo with a command that allows you to do fault localisation.
Oh ok, that's great! I will tentatively add tests-per-line reporting to the work scheduled for solidity-coverage - I suspect there might be a number of uses for that data set. It may be a bit before it gets done; I'm slightly over-subscribed at the moment.
@cgewecke You meant you plan to implement reporting coverage per unit test?
@yxliang01
> You meant you plan to implement reporting coverage per unit test?
I was thinking something more like:
```
$ solidity-coverage --outputTestsPerLine
```

which would produce an object like:

```javascript
{
  'FileName.sol': {
    8: ['it errors when called by alice', 'it can set an owner'],
    13: ['it etc...']
  },
  'AnotherFile.sol': {...},
}
```
This data would just be an input that mutation testing and fault localization strategies could use.
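A hypothetical consumer of that shape, on the mutation-testing side, might look like this. The `testsTouching` helper and the sample data are illustrative only, not a proposed solidity-coverage API:

```javascript
// Sketch: given a mutated file and line, return the grep-able
// test descriptions that touch it, so only those tests are re-run.
const testsPerLine = {
  'FileName.sol': {
    8: ['it errors when called by alice', 'it can set an owner'],
    13: ['it etc...']
  }
};

function testsTouching(file, line) {
  return (testsPerLine[file] || {})[line] || [];
}
```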
What are you thinking of?
@cgewecke Hmm, if we simply get the test description, it's not enough to do fault localization though. I would expect there to be repeated names. But, if we can ensure that one test case can only add at most 1 element onto the array, then this is good enough.
> if we can ensure the fact that one test case can only maximally add 1 element onto the array
@yxliang01 Ah! Some questions...

> 1 element

A filename:line element? We can definitely deduplicate the names in the array, that's a good point.
@cgewecke How about making each array (e.g. `obj['FileName.sol'][8]`) contain strings in a format like jest-snapshot's names for test cases (with the test file's absolute path added to the front)? In that way, duplicates can be avoided and each test case should have a unique entry.

@yxliang01

> can contain strings in format like jest-snapshot's names for test cases (with test file absolute path added to the front)?

The filename is definitely available within the mocha reporter, but is there a safe way of combining the path and the test description into one string without potentially making it difficult to isolate the grep-able piece? What do you think about an array of objects there?
```javascript
{
  'FileName.sol': {
    8: [
      {
        test: 'it errors when called by alice',
        file: '/Users/area/code/tests/alice.js'
      }, ...
    ], ...
  },
  'AnotherFile.sol': {...},
}
```
@cgewecke Ok, now this object looks much better. But, as suggested, there might be tests with the same description. One way to deal with this: for each combination of test file path and test description, we assign a counter which is incremented whenever we see a test case with that combination execute. This essentially says that if, in one `truffle test` run, a test case's code block is executed more than once, we simply treat each run as a unique test case. I hope this clarifies my thought :)
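That counter idea could be sketched as follows; the key format (`::` and `#` separators) is an arbitrary choice for illustration, not a proposed convention:

```javascript
// Sketch: derive a unique key per executed test from its file path
// and description, disambiguating repeats with an occurrence counter.
const seen = new Map();

function uniqueTestKey(file, description) {
  const base = `${file}::${description}`;
  const count = (seen.get(base) || 0) + 1;
  seen.set(base, count);
  // First occurrence keeps the plain key; repeats get a #n suffix.
  return count === 1 ? base : `${base}#${count}`;
}
```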
I believe each test has a title and a fullTitle. In this case I would think that it makes sense to use fullTitle, because it is more precise (fewer collisions possible). I would also think that most people don't have collisions in fullTitles (if mocha even supports that at all), because that would make test results ambiguous.
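To illustrate the distinction: mocha's `fullTitle()` prefixes a test's own title with its enclosing suite titles, which is why collisions are rarer. The function below is a simplified model of that behavior for illustration, not mocha itself:

```javascript
// Simplified model of mocha's Runnable#fullTitle(): suite titles and
// the test title joined with spaces.
function fullTitle(suiteTitles, testTitle) {
  return [...suiteTitles, testTitle].join(" ");
}
```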
Just leaving an example of what things look like inside the mocha reporter.
The reporter's `test` hook (for `it` blocks):

```javascript
runner.on("test", info => { ... });
```

`info` (much abbreviated):

```javascript
info: {
  suite: {
    title: 'Contract: EtherRouter Proxy',
    file: '/Users/cgewecke/code/eth-gas-reporter/mock/test/etherrouter.js'
  },
  test: {
    "title": "Resolves methods routed through an EtherRouter proxy",
    "file": "/Users/cgewecke/code/eth-gas-reporter/mock/test/etherrouter.js"
  }
}
```
@yxliang01 @JoranHonig When I start working on this I will ping you for review so we can get everything exactly right for your purposes.
For many mutation testing optimisations it is helpful, or even required, to have code coverage information. Specifically, we need to be able to determine which tests cover specific lines of code.