Thank you for taking the time to review the PR and for your comments! I really appreciate your detailed feedback and am working on responding to each of your points individually.
Overall, my intuition centers on the hypothesis that the post-instrumentation for branches would be better attributed to the expression that encompasses the entire control structure, applied recursively when that expression is itself a branch of a further enclosing control structure.
In other words, post-instrumentation would never be produced for each case individually, which would also simplify the special cases that currently exist there. This might require some work at the outer level (such as adding post- or pre-instrumentation), although I haven't fully worked out the details yet.
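To make this concrete, here is a rough sketch of the two shapes. The visit function, point numbers, and helper names are placeholders standing in for Bisect_ppx's generated code, not actual instrumenter output:

```ocaml
(* Stub standing in for the generated visit function; illustration only. *)
let ___bisect_post_visit___ point x =
  ignore point;
  x

let f () = 1
let g () = 2

(* Roughly the current shape: the terminal application of each branch is
   post-instrumented individually. *)
let current cond =
  if cond then ___bisect_post_visit___ 0 (f ())
  else ___bisect_post_visit___ 1 (g ())

(* The shape suggested above: a single post-instrumentation attributed to
   the expression enclosing the whole control structure. *)
let proposed cond =
  ___bisect_post_visit___ 2 (if cond then f () else g ())
```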
My hunch is that something along these lines would, in more cases, avoid inserting coverage points that are unvisitable by design, without requiring annotations or otherwise compromising the quality of the coverage testing for the branches themselves.
In this context, yes, I do believe dedicated tests are needed to ensure coverage of these specific scenarios. The "tests" I propose to add are not traditional tests aimed at preserving a specific behavior. Instead, they serve as precise documentation of the current behavior, while also providing coverage for a behavior I hope will be modified in future work. My hope is that, over time, the behavior for control structures will begin to diverge from the general case, and that this will be reflected in these tests (TDD).
In #434, you said:
> The post-instrumentation of application expressions is orthogonal to whether they are in if expressions.
I agree. This is part of what I would like to challenge for the terminal expressions of each branch, when they are apply nodes.
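For concreteness, the shape in question is a branch whose terminal expression is an application (a `Pexp_apply` node in the parsetree). A minimal example:

```ocaml
(* Both arms end in an application: the terminal expressions discussed
   here are Pexp_apply nodes. *)
let lookup table key =
  match Hashtbl.find_opt table key with
  | Some value -> String.uppercase_ascii value (* terminal apply node *)
  | None -> raise Not_found (* terminal apply node; never returns *)
```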
I acknowledge that I haven't fully worked out the details of the concept yet, and I understand your reluctance to distinguish `if-then-else` or `match` from the general case if you do not buy into the hope of improving the behavior there.
As a compelling argument in favor of these specific tests: examining the processing of these control structures has significantly improved my understanding of the current behavior and constraints of the instrumentation. This insight has sparked ideas for working around these limitations in my project and for continuing to increase reported coverage.
Perhaps you find that the cost of adding such tests outweighs the potential benefits, considering the risk that such improvements might not be feasible? If you prefer, we can wait for more progress on an overall proposal before deciding whether to merge these.
> Perhaps you find that the cost of adding such tests outweighs the potential benefits, considering the risk that such improvements might not be feasible? If you prefer, we can wait for more progress on an overall proposal before deciding whether to merge these.
I would suggest keeping the tests that are redundant right now locally, and including them as part of the PR if you do suggest an alternative instrumentation strategy for `if` + `Pexp_apply`. Normally, when I review such PRs, I manually run the new tests against the base branch to observe the difference with the PR, so I will be able to see the difference. That is, if I don't spot it by inspection, which is quite likely in this case, I think.
Consider the following example:
```
-| if cond then raise Not_found
+| if cond then raise_s [%sexp "Key not found", { key : Key.t }]
```
This results in the insertion of non-visitable point 1:
```
-| if cond then raise Not_found
+| if cond then (__bisect_post__visit 1 (raise_s [%sexp "Key not found", { key : Key.t }]))
```
It may be easier to address this by extending the `--exclusions` option to allow the user to define a list of identifiers to exclude from coverage, as if they were built-ins, when they appear in `Pexp_apply`.
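A sketch of the intended effect, assuming a hypothetical syntax for the extended option (the identifier list does not exist today; `raise_s` and `[%sexp ...]` come from Core and ppx_sexp_conv, as in the example above):

```ocaml
(* Requires core and ppx_sexp_conv. *)
open Core

(* Hypothetical: if the instrumenter were invoked with something like
     --exclusions raise_s
   (exact syntax to be designed), then this source... *)
let check cond = if cond then raise_s [%sexp "Key not found"]

(* ...would receive no post-visit point around the raise_s application,
   just as the built-in raise receives none today:

     if cond then raise_s [%sexp "Key not found"]

   instead of:

     if cond then (__bisect_post__visit 1 (raise_s [%sexp "Key not found"])) *)
```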
> I would suggest keeping the tests that are redundant right now locally, and including them as part of the PR if you do suggest an alternative instrumentation strategy for `if` + `Pexp_apply`. Normally, when I review such PRs, I manually run the new tests against the base branch to observe the difference with the PR, so I will be able to see the difference. That is, if I don't spot it by inspection, which is quite likely in this case, I think.
Understood. I am more accustomed to first publishing tests and then reviewing changes in subsequent updates. However, I'm more than willing to adapt as necessary. I'll transition the PR to the draft stage as I dedicate more time to the strategy. Thank you!
> Understood. I am more accustomed to first publishing tests and then reviewing changes in subsequent updates. However, I'm more than willing to adapt as necessary. I'll transition the PR to the draft stage as I dedicate more time to the strategy. Thank you!
Another way to do this is to open the PR with the tests in the first commit and the other changes afterwards, with the changes in test output included in the later commits. I suppose you are potentially turning this PR into that right now :) Thank you.
This PR adds new test cases to monitor the current behavior of branch instrumentation in Bisect_ppx. The test cases focus on complex scenarios involving if-then-else and match branches discussed in #434.
The aim is not to modify the existing behavior directly, but to establish a baseline for future changes.
By adding these tests, we can monitor how future changes to Bisect_ppx may alter these behaviors (TDD).
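For illustration, these are the kinds of branch shapes such tests exercise (sketched here, not the literal test files):

```ocaml
(* If-then-else whose branches terminate in applications, one of which
   never returns. *)
let f cond = if cond then print_endline "found" else raise Not_found

(* A match whose arm is itself a control structure, exercising the
   recursive case discussed above. *)
let g x b =
  match x with
  | Some n -> if b then succ n else pred n
  | None -> 0
```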