ZhuUx opened 7 months ago
@rustbot label +A-code-coverage
The tracking issue should probably include a (brief) definition of MCDC, as otherwise this is untrackable by anyone who does not already know what the issue is for.
> condition: boolean expressions that have no binary operators. For example, `a || b` is not a "condition" because it has an or operator, while `a == b` is.
nit on the terminology, but I'd rather say that conditions are boolean expressions that have no logical operator. cf. the `BinOp` and `LogicalOp` definitions.
Excellent, that's much better, thank you! The teams every now and then review tracking issues and it is helpful if whoever is participating in the review can understand what the tracking issue is about without any domain expertise (even if only so they can fetch the appropriate subject-matter expert).
> nit on the terminology, but I'd rather say that conditions are boolean expressions that have no logical operator.
Thanks for the correction! I should have written "binary logical operators". It's a bit different from the usual definition, but I think treating "condition + unary operator" as a condition does not lead to different results in MC/DC.
Upon implementing MC/DC for pattern matching, I found that decisions here might be confusing.

Consider the pattern `if let (A | B, C) = value`. Intuitively the decision here looks like `(matches(A) || matches(B)) && matches(C)`. However, the compiler will rearrange it and make its CFG look like `matches(C) && (matches(A) || matches(B))` to reduce complexity.

This impacts MC/DC analysis because which condition gets short-circuited changes. We should find a way to determine the rearranged decision.
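A minimal sketch of the situation (the enum and names are only illustrative; the comments just restate the rearrangement described above):

```rust
enum Letter { A, B, C }

fn check(value: (Letter, Letter)) -> bool {
    // Written intuitively, the decision is
    //     (matches(A) || matches(B)) && matches(C)
    // but the compiler may build a CFG that tests the second tuple field first,
    // i.e. roughly matches(C) && (matches(A) || matches(B)).
    matches!(value, (Letter::A | Letter::B, Letter::C))
}

fn main() {
    assert!(check((Letter::A, Letter::C)));
    assert!(!check((Letter::B, Letter::B)));
}
```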
I wonder how let-chains should be instrumented. For example:

```rust
if let (A | B, C) = value && let Some(X | Y) = other && (x || y) {
}
```

Here, I can't decide whether we should insert 1 or 3 MC/DC decision records.

Maybe the 3 decisions would be better and easier to do, but I fear this would add a lot of verbosity to the report, and also increase the number of conditions in decisions, which would therefore increase the complexity of the testing.

What do you think?
> I wonder how should let-chains be instrumented. For example: `if let (A | B, C) = value && let Some(X | Y) = other && (x || y) { }`
>
> Here, I can't decide whether we should insert 1 or 3 MCDC decision records here:
>
> * 3 decisions: 1 for each refutable pattern matching, and 1 for the top-level boolean expression
> * 1 decision with …
>
> Maybe the 3 decisions would be better and easier to do, but I fear this would add a lot of verbosity to the report, and also increase the number of conditions in decisions, which therefore increase the complexity of the testing.
>
> What do you think?
I'd tend to 3 decisions, as it's clearer and easier to implement.

I have managed to construct decisions for refutable pattern matching and found it is very different from normal boolean expressions (e.g. rustc does not promise the evaluation order of pattern matching). Thus I'd rather view refutable pattern matching as something like `try_extract_data_from(&mut bindings...) -> bool`.

Besides, if we want to present consistent results in MC/DC, we had better treat all refutable pattern matches as decisions, or none of them; otherwise we might face a paradox like the one branch coverage has with lazy expressions now.
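A rough illustration of that view (the `try_extract_data_from`-style helper below is hypothetical, not anything rustc actually generates):

```rust
#[allow(dead_code)]
enum Pair { Good(u32, u32), Other }

// Hypothetical helper: a refutable pattern "tries to extract" its bindings
// and reports success or failure as a single boolean outcome.
fn try_extract_data_from(value: &Pair, a: &mut u32, b: &mut u32) -> bool {
    if let Pair::Good(x, y) = value {
        *a = *x;
        *b = *y;
        true
    } else {
        false
    }
}

fn main() {
    let (mut a, mut b) = (0, 0);
    // `if let Pair::Good(a, b) = v { ... }` is viewed as one bool-returning check.
    if try_extract_data_from(&Pair::Good(1, 2), &mut a, &mut b) {
        println!("extracted {a} {b}");
    }
}
```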
> I'd tend to 3 decisions as it's more clear and easy to be implemented. I have managed to construct decisions for refutable pattern matching and found it is very different with normal boolean expressions (eg. rustc does not promise evaluated order of pattern matching). Thus I'd rather view refutable pattern matching as something like `try_extract_data_from(&mut bindings...) -> bool`.
That makes sense, indeed. Let's go with this strategy then :)
Also, I think it would make sense to allow the user to disable pattern matching instrumentation (for example, to reduce the binary size if needed). Maybe we could do this with a flag `-Z coverage-options=no-mcdc-in-pattern-matching` (or something shorter if possible).
> Maybe we could do this with a flag `-Z coverage-options=no-mcdc-in-pattern-matching` (or smth shorter if possible).
Good idea. `-no-pattern-matching` may be enough, since normal branch coverage might also use this option, and it is mutually exclusive with MC/DC to some extent for now.
I have a few cleanup steps planned for after #123409/#124255:

- Split the `compiler/rustc_mir_transform/src/coverage/spans.rs` file into two files, one containing the existing span-processing code, and a separate file containing `BcbMapping` and the code that extracts branch/mcdc info from MIR.
- In `mcdc_parameters` there's a big chunk of code inside an `unsafe` block, which I would like to try to split into smaller and safer pieces.
Nested decisions might lead to unexpected results due to the sorting of mappings that happens in llvm. See this patch to llvm for details. Let-chains for pattern matching are affected by this the most.
We treat decisions with only one condition as normal branches now, because MC/DC won't give different results from normal branch coverage in such cases.

Nevertheless, the same holds for decisions with only two conditions. Both `a && b` and `a || b` are decisions that naturally have tree form, and as this paper suggests (see the two theorems in section 5), 100% object branch coverage (what llvm uses) of such decisions implies 100% MC/DC. Therefore we could also ignore decisions comprised of 2 conditions. How about doing so?
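A quick illustration of why this holds for `a && b` (just a sketch of the intuition, not the paper's argument):

```rust
// Sketch: executable test vectors for `a && b`, and why covering both
// branches of each condition already yields MC/DC independence pairs.
fn decision(a: bool, b: bool) -> bool {
    a && b
}

fn main() {
    // Branch coverage needs a = true/false and b = true/false (b only
    // executes when a is true), which forces these three vectors:
    let vectors = [(false, false), (true, false), (true, true)];
    for (a, b) in vectors {
        println!("a={a} b={b} -> {}", decision(a, b));
    }
    // Independence pair for a: (true, true) vs (false, _)    => T vs F
    // Independence pair for b: (true, true) vs (true, false) => T vs F
}
```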
I should write something to clarify the thinking behind the MC/DC design for pattern matching, in case anyone is confused about the behaviors.
Coverage results of pattern conditions may seem wrong/unexpected.

This can occur when we check patterns containing or (`|`). Briefly speaking, rustc can rearrange the evaluation order to reduce the complexity of the control flow, and or-patterns are usually checked last. For example, with the pattern `if let (A | B, C) = tuple`, rustc would check `is_C(tuple.1)` first and then `is_A(tuple.0) || is_B(tuple.0)`. Also, in match guards:
```rust
match tuple {
    (A(Some(_)), B | C) => { /*...*/ },
    (B | C, A(_)) => { /*...*/ },
    (A(None), _) => { /*...*/ },
    _ => { /*...*/ }
}
```
rustc would check the first and the third arms first. The coverage results are derived from the actual control flow generated by the compiler, which might not match intuition.
~~Why no MC/DC for match guards: because decisions for irrefutable patterns can never be false. Though we can take each arm as a decision, due to the reordering of conditions we can hardly determine which conditions a decision contains from the control flow's point of view (one condition might impact two or more decisions). Still, we might find some way to deal with it.~~ EDIT: Now it is possible.
> We treat decisions with only one condition as normal branches now, because mcdc won't give different results from normal branch coverage on such cases. Nevertheless, it's same for decisions with only two conditions. Both `a && b` and `a || b` are decisions which have tree forms naturally, and as this paper (see the two theorems at section 5) suggests, 100% object branch coverage (what used by llvm) of such decisions implies 100% mcdc. Therefore we could also ignore decisions comprised of 2 conditions. How about doing so?
I am not really convinced that this is something we want to do. For the 1 condition case, the user trivially understands that changing the condition changes the decision, so MC/DC is not really needed, and adding MCDC reporting anyway to these decisions would probably just bloat the report and make it harder to read.
In the 2-conditions case, even though the theorem proves it, I find it not trivial to understand that branch coverage implies MCDC.
Moreover, in the context of MCDC, I tend to define decisions as boolean expressions composed of several conditions, so in that sense, we don't have to instrument 1-condition decisions by definition, while we still have to instrument decisions with 2+ conditions.
At last, while we have historically never instrumented 1-condition decisions for successfully certified code, it may be harder to justify not instrumenting 2-cond. decisions to the certification authorities, even though, again, it is proved to be equivalent.
Also, I don't feel like this will bring a significant improvement in either runtime speed or memory usage compared to the gains of not instrumenting 1-condition decisions, mostly because I think 1-condition decisions are the most frequently encountered.
We might need to decide how to deal with constant folding. LLVM already supports, to some extent, ignoring conditions that can be constant-evaluated. For instance,
```rust
const CONSTANT_BOOL: bool = true;

fn foo(a: bool) {
    if a && CONSTANT_BOOL {
        /*...*/
    }
}
```
In such a case llvm shows 100% coverage once `a` is covered. (For now this is not true for rustc due to a minor divergence on `CovTerm`, which can be fixed very easily; we can check it with clang.) But if we changed `CONSTANT_BOOL` to `false`, we could never get covered conditions because the decision is always `false`.

Such constants may come from template parameters or values determined by the target.
I'd propose to eliminate these conditions, and all other conditions affected by them, from MC/DC. That is to say:

- For `a && CONST`, if `CONST == false`, condition `a` is treated as folded because it could never be covered in MC/DC.
- For `a || CONST`, if `CONST == true`, `a` will also be eliminated.
- For `a && (b || CONST)`, if `CONST == true`, `b` will be eliminated.

And if a decision always evaluates to a single value, we remove it and transform all of its conditions to normal branches. We can work on either rustc or llvm to implement this method.
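A sketch of the three cases from the list above (the constant names are just illustrative):

```rust
// Illustrative only: the three shapes from the list above, with comments
// noting which conditions the proposal would drop from MC/DC.
const CONST_TRUE: bool = true;
const CONST_FALSE: bool = false;

fn cases(a: bool, b: bool) {
    // `a && CONST` with CONST == false: the decision is always false,
    // so `a` is treated as folded (it can never be covered in MC/DC).
    if a && CONST_FALSE { /* ... */ }

    // `a || CONST` with CONST == true: the decision is always true,
    // so `a` is eliminated as well.
    if a || CONST_TRUE { /* ... */ }

    // `a && (b || CONST)` with CONST == true: `b || CONST` is always true,
    // so `b` is eliminated while `a` still decides the outcome.
    if a && (b || CONST_TRUE) { /* ... */ }
}

fn main() {
    cases(true, false);
}
```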
Let's collect some insights.
LLVM 19 has changed the start condition id from 1 to 0 in llvm #81257. This could break all MC/DC tests once rustc upgrades its llvm backend. Since llvm 19 is still under development, I plan to deal with it when rust starts the process of upgrading the backend.
> LLVM 19 has changed the start condition id from 1 to 0 at llvm #81257. This could break all mcdc tests once rustc upgraded its llvm backend. Since llvm 19 is still under development, I plan to deal with it when rust launches process to upgrade the backend.
Sounds reasonable.
Once the compiler switches over to LLVM 19, we might even want to stop supporting MC/DC on LLVM 18, to avoid the complexity of having to support big differences between 18 and 19. 🤔
> I'd propose to eliminate these conditions and all other conditions effected by them from MCDC. That says,
>
> For `a && CONST`, if `CONST==false`, condition `a` is treated as folded because it would never be covered in mcdc. For `a || CONST`, if `CONST==true`, `a` will be also eliminated. For `a && (b || CONST)`, if `CONST==true`, `b` will be eliminated. And if a decision was always evaluated to solitary value, we remove it and transform its all conditions to normal branches. We can work on either rustc or llvm to implement this method.
>
> Let's request some nice insights.
I have several questions regarding constant folding.

First, when you say "`a` will be eliminated", do you mean that we choose not to instrument it for MC/DC because it is uncoverable, or does it get dropped by a subsequent constant folding pass?

In the case where `a` is not a variable but an expression with potential side effects, I guess the compiler can't just drop it. So do we still choose not to instrument it?
Also, in the cases `a && TRUE` and `b || FALSE`, the expressions are equivalent to `a` and `b` respectively. Thus, the constants will be eliminated, so I guess there is no way for this to be detected.
Nobody uses constants in decisions intentionally, so we primarily need to think about generated code. IMO, from a user point of view, having a way to know what happened during the instrumentation would be nice.

One idea would be to emit a warning message saying something like "Condition X of the decision was resolved to constant False; this will make MC/DC coverage impossible for conditions X, Y, Z". However, I am not really convinced by this approach, because it would introduce compilation warnings that may be completely expected.
The other idea would be to add metadata to the mappings, so that `llvm-cov` can display a message for a specific condition.
For example:
```rust
fn foo(a: bool) {
    if a && false {
        println!("success");
    }
}
```
```text
|---> MC/DC Decision Region (LL:CC) to (LL:CC)
|
|  Number of Conditions: 2
|     Condition C1 --> (LL:CC)
|     Condition C2 --> (LL:CC)
|
|  Executed MC/DC Test Vectors:
|
|     C1, C2    Result
|  1 { F,  F  = F      }
|  2 { T,  F  = F      }
|
|  C1-Pair: Skipped: uncoverable (folded by C2)
|  C2-Pair: Skipped: constant resolved to False
|  MC/DC Coverage for Decision: 100%
|
------------------
```
One question that this raises is how we consider "skipped" conditions. Do we account for them in the whole decision validation, or do we ignore them and consider the decision covered if all non-skipped conditions are covered?

Furthermore, the biggest downside of this solution is that it will probably involve a massive refactor of LLVM's ProfileData code.

What do you think?
That's what I expect: constants are marked as "Folded" while all uncoverable conditions are noted.
I've tried a bit on both the rustc and llvm sides. It seems better to implement this on the llvm side: since rustc only knows which conditions are folded after injecting statements, it's not elegant for rustc to do this work.
> That's what I expect: constants are marked as "Folded" while all uncoverable conditions are noted.
It's likely I don't understand the intricacies of constant folding in rust; given its unique use cases, ultimately you can handle these however you wish. However, I want to point out that in clang, we mark conditions that are constant folded so that they can be visualized for the user (both for branch coverage and MC/DC). In MC/DC, these constant folded conditions are excluded from measurement (as you know, they're uncoverable and therefore cannot have any independence pairs).
However, a key distinction I want to make is that for MC/DC in clang, we actually do not fold the conditions (we disable this) and therefore we actually instrument the folded conditions in order to facilitate proper coverage of the other conditions in the decision. We have to do this to ensure we match the possible test vectors correctly.
> However, a key distinction I want to make is that for MC/DC in clang, we actually do not fold the conditions (we disable this) and therefore we actually instrument the folded conditions in order to facilitate proper coverage of the other conditions in the decision. We have to do this to ensure we match the possible test vectors correctly.
In my preliminary thoughts we can avoid notable divergences between rustc and llvm. The changes I wish to make are:

- Front ends set `Counter` or `FalseCounter` to `Zero` if the condition is folded. We can set only the counter of the folded branch to `Zero` (like rustc does now).
- llvm-cov treats a condition as folded if its `Counter` or `FalseCounter` is `Zero`.

This method does not invade the instrumentation but only affects how llvm presents results to users.
For instance, suppose we have a decision `a && (b || CONSTANT_TRUE || c)`.

In this scenario we still have 4 MC/DC branch mappings: `a`, `b`, `CONSTANT_TRUE`, `c`, in which clang and rustc mark the `FalseCounter` of `CONSTANT_TRUE` as `Zero`. Then llvm-cov knows `CONSTANT_TRUE` is folded and always evaluates to `true` because it knows the `FalseCounter` is `Zero`.
After llvm-cov collects all test vectors, it can determine the practical test vectors by picking those whose bits for folded conditions have the expected values. Say, in our example, only the test vectors in this table are practical (D means DontCare in the test vector):

| a | b | CONSTANT_TRUE | c | Decision |
|---|---|---------------|---|----------|
| 0 | D | D | D | 0 |
| 1 | 1 | D | D | 1 |
| 1 | 0 | 1 | D | 1 |

As we see, the practical test vectors are those whose bit for `CONSTANT_TRUE` is `1` or `DontCare`.
Next let's find which conditions are uncoverable or folded:

- `a`: the two groups `{(0,D,D,D)}` and `{(1,1,D,D), (1,0,1,D)}` show different decision values, thus `a` is coverable and not folded.
- `b` (excluding `DontCare`): the two groups `{(1,1,D,D)}` and `{(1,0,1,D)}` share the same decision value. Thus we know `b` is uncoverable due to the constant, because it can not change the value of the decision.
- `CONSTANT_TRUE` is marked as folded, just as llvm-cov does now.
- `c`: `c` is `DontCare` in all practical test vectors, so it is folded by the constant.

Briefly, the scheme is:

1. Front-end compilers set the counter to `Zero` if the condition is folded.
2. llvm-cov marks a condition as folded if its `Counter` or `FalseCounter` was `Zero`.
3. llvm-cov keeps only the practical test vectors, i.e. those where every folded condition has its constant value or `DontCare`.
4. For each remaining condition, split the practical test vectors into groups where the condition is `true`, `false` or `DontCare`:
   - If the `true` group or the `false` group is empty, the condition is folded.
   - If there exist `tv1` in the `true` group and `tv2` in the `false` group such that `tv1` and `tv2` have different decision values, the condition is normal.
   - If all vectors in the `true` group and the `false` group have the same decision value, the condition is uncoverable.

I'm going to draft it in this way, since not much of the existing llvm code needs to be refactored. Is it worth it?
(I've opened an issue at llvm #93560; we can discuss the scheme there.)
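A rough sketch of the grouping rules above in Rust (the types and names are made up for illustration; this is not llvm-cov's actual implementation):

```rust
#[derive(Clone, Copy, PartialEq)]
enum Bit { True, False, DontCare }

#[derive(PartialEq, Debug)]
enum Kind { Normal, Folded, Uncoverable }

struct TestVector {
    conditions: Vec<Bit>,
    decision: bool,
}

// Classify one condition over the practical test vectors, following the
// grouping rules sketched above.
fn classify(vectors: &[TestVector], cond: usize) -> Kind {
    let group = move |wanted: Bit| vectors.iter().filter(move |tv| tv.conditions[cond] == wanted);
    let true_group: Vec<_> = group(Bit::True).collect();
    let false_group: Vec<_> = group(Bit::False).collect();

    // The condition never takes one of its values: folded.
    if true_group.is_empty() || false_group.is_empty() {
        return Kind::Folded;
    }
    // Some pair across the groups flips the decision: normal (coverable).
    for tv1 in &true_group {
        for tv2 in &false_group {
            if tv1.decision != tv2.decision {
                return Kind::Normal;
            }
        }
    }
    // Every pairing leaves the decision unchanged: uncoverable.
    Kind::Uncoverable
}

fn main() {
    use Bit::{DontCare as D, False as F, True as T};
    // The practical test vectors for `a && (b || CONSTANT_TRUE || c)` above.
    let tvs = [
        TestVector { conditions: vec![F, D, D, D], decision: false },
        TestVector { conditions: vec![T, T, D, D], decision: true },
        TestVector { conditions: vec![T, F, T, D], decision: true },
    ];
    assert_eq!(classify(&tvs, 0), Kind::Normal);      // a
    assert_eq!(classify(&tvs, 1), Kind::Uncoverable); // b
    assert_eq!(classify(&tvs, 3), Kind::Folded);      // c
}
```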
Regarding `match` statements, there is a case that makes me uncertain about how to determine the MC/DC structure, and I hope to collect some views on it.

Currently for #124278 I decided to treat each arm of a `match` statement as a decision, and to update the executed vectors of all arms at all matched blocks. Such an implementation works well for most cases; however, a case like the following still shows weird results under it:
```rust
// a: i32;
// val: enum Pat { A(i32), B }
match val {
    Pat::A(f) if f == a => { /*...*/ }
    Pat::A(1) if a > 0 => { /*...*/ }
    _ => { /*...*/ }
}
```
Now there are two decisions, `Pat::A(f) if f == a` and `Pat::A(1) if a > 0`, and the matching part of the first decision covers that of the second.

The question is: what if the input is `val = Pat::A(1)`, `a = 1`? Clearly this will match the first arm, but all conditions of the second arm are also true. Worse, among the 3 conditions of the second arm (`val is Pat::A`, `(val as Pat::A).0 == 1` and `a > 0`), only `val is Pat::A` is touched in the control flow; the other two would be marked as `false` in such a case if we try to update the test vector of the second decision.

Then what result should we present to users? If we just ignore the latter arms, once the last arm is not a wildcard we can get a decision that never fails. Otherwise, if we also record test vectors of the latter arms, we can get the weird result described above.
cc @pietroalbini This may correspond to your paper.
It seems more reasonable to only update the executed test vectors that record one arm and all of its preceding arms. Since there may be two or more candidates matching the value, we can not treat candidates as patterns joined with `|`, which are mutually exclusive.

As a result, the last arm does not produce a decision. That is to say, in
```rust
// v: (Option<_>, Option<_>)
match v {
    (Some(_), Some(_)) => say("both are some"),
    (None, _) | (_, None) => say("at least one is none"),
}
```
MC/DC won't generate mappings for `(None, _) | (_, None)`, as it never fails if reached. Though it looks like a refutable pattern, it is actually no different from the wildcard `_` in `match` statements in terms of coverage.
With the optimization allowing more conditions in a decision, llvm has also introduced two options, `-fmcdc-max-conditions` and `-fmcdc-max-test-vectors`, to control condition limits. I have not implemented them in #126733 yet because I wish to discuss them more in rust and go with Zalathar's suggestion.
There are two hard limits, as the clang documentation describes:

- The number of conditions in a decision can not exceed `i16::MAX`.
- The number of test vectors in a decision can not exceed `i32::MAX`.

The reason is trivial: llvm uses `i16` to index condition ids and `i32` to index MC/DC bitmaps.
Still, users can enforce stricter rules with `mcdc-max-conditions`/`mcdc-max-decision-depth` and limit MC/DC memory usage with `mcdc-max-test-vectors`.

First, I'm not sure how we should name the option for `mcdc-max-test-vectors`. "Test vector" is an internal concept of the compiler and users might not grasp it at first glance. For this reason I would propose `mcdc-max-memory-cost` or `mcdc-max-bitmap-mem`. By default the memory limit is 256MiB (1 bit per test vector). `mcdc-max-bitmap-mem` might be better, since nested decisions also cost more temporary variables on the stack, which is not counted by this limit. Or is there other advice?
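For reference, a back-of-the-envelope sketch relating a bitmap memory budget to a test-vector count, assuming 1 bit per test vector as above (the helper name is made up):

```rust
// Hypothetical helper: translate a bitmap memory budget into a maximum
// number of test vectors, assuming 1 bit per test vector.
fn max_test_vectors(budget_bytes: u64) -> u64 {
    budget_bytes * 8
}

fn main() {
    let budget = 256u64 * 1024 * 1024; // the 256MiB default mentioned above
    // 2^31 test vectors, essentially the i32-indexed hard limit noted earlier.
    println!("{} test vectors", max_test_vectors(budget));
}
```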
Second, I'm not very clear about how we should process them as lints. The key difference is that we usually treat lints as something users should care about, while these options also control how the compiler works. It's a bit unusual to warn users (or not) based on compiler options. Instead, a dedicated lint like `complex_decision` could do it, and users could add `#[coverage_off]` over the code to quiet the compiler if no more MC/DC instrumentation would be generated for the code due to the limits.
@ZhuUx
> However I'm not determined about how we name the option for `mcdc-max-test-vectors`. "Test vector" is an internal concept of compiler and users might not get it clearly at the first glimpse.
This is the reason why I don't expose the options in clang. (They are cc1 options.)

"The number of test vectors" can be explained as "the number of possible paths" or "the number of combinations of conditions".
Say, `(a || b)` in C: `b` is not evaluated (Don't Care) when `a` is `true`. So the possible combinations of (`a`, `b`) are: `(false, false)`, `(false, true)`, `(true, D/C)`. The number of test vectors is 3 in this case.

Would this be a hint for the naming?
Hmm, my instinct is to expose the "test vectors" knob directly, without trying to hide it from the user.
It might seem less friendly to the user, but it's more honest. If we're actually limiting the number of test vectors, but hiding that behind a setting that claims to control something else, that might result in more confusion in the long run.
But this is just an initial thought.

> "The number of test vectors" can be explained as "the number of possible paths" or "the number of combinations of conditions". Say, `(a || b)` in C: `b` is not evaluated (Don't Care) when `a` is `true`. So possible combinations of (`a`, `b`) are: `(false, false)`, `(false, true)`, `(true, D/C)`. The number of test vectors is 3 in this case. Would this be a hint for the naming?
Thanks for the explanation! I want to discuss this question because I do not know what users care about more. I tried to replace `max-test-vectors` with `max-memory-cost` and transformed the argument into a number of test vectors. But I'm not sure memory cost is the only factor users care about that derives from the number of test vectors.
> If we're actually limiting the number of test vectors, but hiding that behind a setting that claims to control something else, that might result in more confusion in the long run.
Good point. I was not aware that more terms might be introduced in the future, making the memory cost more indeterminate.
> Thanks for explanation! I want to discuss this question because I do not know what users more care about. I tried to replace `max-test-vectors` with `max-memory-cost` and transformed the argument to number of test vectors.
From a compiler user point of view (of MC/DC coverage measurement for safety-critical development): If you need to use MC/DC, you are well aware of terms like test vectors and condition combinations (since these are basic definitions to understand the coverage results). Please expose them in this way without further indirection :-).
naively, "max-bitmap-mem" also implies some pretty hard bounds on memory usage, and people often get really fussy if you use even one byte more than they expect, whereas doing that might be sensible in many cases.
> mcdc won't generate mappings for `(None, _) | (_, None)`, as it never fails if touched. Though it looks like a refutable pattern, actually it is not different with wildcard `_` in `match` statements in terms of coverage.
Yeah, exhaustiveness checking ensures that the last arm catches all values that were not caught by another arm. In other words, a match like this:
```rust
match foo {
    <complicated pattern> => {}
    <some other pattern> => {}
}
```
is guaranteed to be the same (except bindings) as:
```rust
match foo {
    <complicated pattern> => {}
    _ => {}
}
```
which is also the same as:
```rust
if let <complicated pattern> = foo {
} else {
}
```
So a `match` with `n` arms should be counted as having `n-1` decisions.

(By this reasoning, an irrefutable pattern such as `let <pattern> = foo;` counts as 0 decisions, which is exactly what we want.)
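A small sketch of how that counting would apply (the decision counts in the comments just apply the rule above):

```rust
fn describe(v: Option<i32>) -> &'static str {
    // 3 arms => 2 decisions: the last arm can never fail to match.
    match v {
        Some(0) => "zero",
        Some(_) => "non-zero",
        None => "none",
    }
}

fn main() {
    // An irrefutable `let` counts as 0 decisions.
    let label = describe(Some(3));
    println!("{label}");
}
```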
> So a `match` with `n` arms should be counted as having `n-1` decisions.
Yeah, that's pretty much the conclusion I came to in #124154 when I was looking into branch coverage for `match` expressions.
We uploaded the preprint of the paper to arXiv: https://arxiv.org/abs/2409.08708
Introduction
Modified condition/decision coverage (MC/DC) is a code coverage criterion used widely in safety critical software components and is required by standards such as DO-178B and ISO26262.
Terminology

- condition: a boolean expression that has no binary logical operators. For example, `a || b` is not a "condition" because it has an or operator, while `a == b` is.
- decision: the longest boolean expression composed of conditions and binary logical operators only.

MC/DC requires that each condition in a decision is shown to independently affect the outcome of the decision. E.g. suppose we have code like:
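A minimal sketch of such code (the function and variable names are just illustrative):

```rust
fn foo(a: bool, b: bool, c: bool) {
    // `(a || b) && c` is the decision; `a`, `b` and `c` are its conditions.
    if (a || b) && c {
        /*...*/
    }
}
```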
Here `(a || b) && c` is a decision and `a`, `b`, `c` are conditions.

- With `(a=true, b=false, c=true)` and `(a=false, b=false, c=true)`, we say `a` can independently affect the decision, because the value of the decision changes as `a` changes while `b` and `c` are kept unchanged. So we get 1/3 MC/DC here (1 for `a`, and 3 for `a`, `b`, `c`).
- `(a=false, b=true, c=true)` and `(a=false, b=false, c=false)` also show that `b` can independently affect the decision. Though in the latter case `c` is also changed, it is short-circuited and has no impact (thus we can view it as the same as `c=true`), while `c` is not acknowledged due to the change of `b`. Together with the two cases before, we get 2/3 MC/DC.
- `(a=true, b=false, c=true)` and `(a=true, b=false, c=false)` show `c` can do the same. By now we get 3/3.

Notice that there are duplicate cases, so the test case collection {`(a=true, b=false, c=true)`, `(a=false, b=false, c=true)`, `(a=false, b=true, c=true)`, `(a=false, b=false, c=false)`, `(a=true, b=false, c=false)`} is sufficient to prove 3/3 MC/DC. In fact, at least n+1 cases are needed to prove 100% MC/DC of a decision with n conditions. (In this example, {`(a=true, b=false, c=true)`, `(a=false, b=false, c=true)`, `(a=false, b=true, c=true)`, `(a=true, b=false, c=false)`} are enough.)

Progress
A basic implementation of MC/DC is filed at #123409, which has some limits. There are still several cases to handle:

- `let val = a || b`. Done at #125756, #125766.
- `if (a || b) || inner_decision(c || d)`. Done at #124255, #126733.
Known Issues