I am struggling with how to implement proper tests for the `DefaultStateBasedChangeResolutionStrategy`. Currently, we require the derived changes to be identical to the recorded ones. However, this cannot be guaranteed and only works because the tests are quite simple and until now have always used identifier-based matching.
For similarity-based matching, we cannot make any assumptions about how the change sequence looks; we can only require that our view is in the expected state after applying the derived changes.
For identifier-based matching, we can assume that matches are always computed correctly and thus keep making strong assumptions about the change sequence. There is one edge case, namely changing an element's id: in this case we can only require that there is a delete and a create change (as an element with a changed id must not match the old element in identifier-based matching).
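For reference, EMF Compare's matching mode is typically selected via its `UseIdentifiers` option (`ONLY`/`WHEN_AVAILABLE` for identifier-based, `NEVER` for similarity-based matching). The following standalone setup is only an illustration of that distinction, not code from this repository:

```java
import org.eclipse.emf.compare.Comparison;
import org.eclipse.emf.compare.EMFCompare;
import org.eclipse.emf.compare.match.DefaultComparisonFactory;
import org.eclipse.emf.compare.match.DefaultEqualityHelperFactory;
import org.eclipse.emf.compare.match.DefaultMatchEngine;
import org.eclipse.emf.compare.match.IMatchEngine;
import org.eclipse.emf.compare.match.eobject.IEObjectMatcher;
import org.eclipse.emf.compare.match.impl.MatchEngineFactoryImpl;
import org.eclipse.emf.compare.match.impl.MatchEngineFactoryRegistryImpl;
import org.eclipse.emf.compare.scope.IComparisonScope;
import org.eclipse.emf.compare.utils.UseIdentifiers;
import org.eclipse.emf.ecore.resource.Resource;

class MatchingModeExample {
    // Compares two resources with the given matching mode:
    // UseIdentifiers.ONLY -> identifier-based, UseIdentifiers.NEVER -> similarity-based.
    static Comparison compare(Resource left, Resource right, UseIdentifiers useIdentifiers) {
        IEObjectMatcher matcher = DefaultMatchEngine.createDefaultEObjectMatcher(useIdentifiers);
        IMatchEngine.Factory matchEngineFactory = new MatchEngineFactoryImpl(
                matcher, new DefaultComparisonFactory(new DefaultEqualityHelperFactory()));
        matchEngineFactory.setRanking(20); // rank above the default factory
        IMatchEngine.Factory.Registry registry = MatchEngineFactoryRegistryImpl.createStandaloneInstance();
        registry.add(matchEngineFactory);
        EMFCompare comparator = EMFCompare.builder().setMatchEngineFactoryRegistry(registry).build();
        IComparisonScope scope = EMFCompare.createDefaultScope(left, right);
        return comparator.compare(scope);
    }
}
```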
Thus I see two options for similarity-based matching: relax the tests to only validate the resulting model, or keep the requirements on the change sequence (which currently works for almost all test cases). The former is a weaker validation of our atomic changes, the latter could lead to problems in the future when the `DefaultStateBasedChangeResolutionStrategy` is modified, e.g. to use an alternate model comparison framework.
For identifier-based matching I would preserve the current logic of requiring the exact change sequence.
What are your opinions on this topic, @HeikoKlare @tsaglam?
First of all: I think it is reasonable to compare model states instead of changes. Some thoughts on that:
I would phrase the question a little differently, because it should actually not be about what we can test but about what we should test: What is the contract we expect the strategy to fulfill, and how should we test that? We may not have made that contract explicit, but, in general, this contract should require that the result of applying the changes delivered by the strategy is the same as the final state given to the strategy to derive the changes. This would be an argument for only comparing the model states.
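As a rough illustration of testing only against that contract (the calls around deriving and applying the changes are placeholders, not the actual Vitruv API), such a check could look like this:

```java
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.util.EcoreUtil;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;
import static org.junit.jupiter.api.Assertions.assertTrue;

class StateBasedContractSketch {
    // Placeholder for "derive changes between the old and new state and apply them
    // to the given copy"; the real calls would go through the strategy under test.
    interface DeriveAndApply {
        void accept(Resource copyOfOldState, Resource newState);
    }

    static void assertReproducesFinalState(Resource oldState, Resource newState,
            DeriveAndApply deriveAndApply) {
        // work on a copy so the original old state stays untouched
        ResourceSet resourceSet = new ResourceSetImpl();
        resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
                .put("*", new XMIResourceFactoryImpl());
        Resource copy = resourceSet.createResource(oldState.getURI());
        copy.getContents().addAll(EcoreUtil.copyAll(oldState.getContents()));
        deriveAndApply.accept(copy, newState);
        // the contract only requires the resulting contents to equal the final state
        assertTrue(EcoreUtil.equals(copy.getContents(), newState.getContents()),
                "applying the derived changes must reproduce the given final state");
    }
}
```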
I could, however, imagine that there are scenarios in which we expect a specific change sequence to be derived and then actually have to validate the changes. For example, we might expect the strategy to sequentialize Java changes in a proper way, i.e., first generating classifier changes, then generating signature changes, and finally generating method implementations, as other orders may be invalid. That would, however, be a specific test for Java views and not for the general strategy.
We could, of course, also think about keeping the change comparison until it fails (e.g. because we alter the model comparison framework). That may, however, be confusing, as one has to know that it may be valid for the tests to fail when changing the tested functionality, as they are actually too strict w.r.t. the contract expected from the functionality. In short: we should only test behavior we actually expect, and if we agree that we do not require specific changes but only a specific result from applying them, we should only test the latter.
> First of all: I think it is reasonable to compare model states instead of changes.
I agree
> We could, of course, also think about keeping the change comparison until it fails (e.g. because we alter the model comparison framework). That may, however, be confusing, as one has to know that it may be valid for the tests to fail when changing the tested functionality, as they are actually too strict w.r.t. the contract expected from the functionality. In short: we should only test behavior we actually expect, and if we agree that we do not require specific changes but only a specific result from applying them, we should only test the latter.
This was one point I was thinking about as well: regarding maintainability, it is not ideal to check the changes when the expected behavior relates to the resulting states. If we want to keep it as-is and avoid confusion in the future, we could also add comments to the test cases as a "dirty" hotfix, mentioning that framework changes can alter the derived change sequences and thus lead to test failures.
Thanks for your comments. Based on your feedback I would propose the following solution: for the scope of this PR, keep the logic of validating the obtained change sequence as-is and open an issue to refactor the tests such that state validation is applied. I would prefer not to include this refactoring in this PR already, as it requires quite some effort, plus this PR does not introduce any stronger constraints than there were before. The refactoring would require state comparison for the UML and PCM mockup models. Additionally, we should rethink the state validation of the basic tests, as the state comparison uses EMF Compare, and using the same tool for validation that is being validated is not ideal (who watches the watchers?).
I had to disable two test cases of the UML mock which did not yield the expected change sequences. Apart from that, all tests succeed with all three id matching strategies. For the newly added identifier-changing test cases, I already provided a more generic change sequence validation that should be valid for any underlying strategy (at least for the id-based matching).
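For illustration, such a relaxed validation for the id-change case could only assert that the derived sequence contains a deletion and a creation, without constraining their exact order. The change type names below are placeholders for the respective Vitruv atomic change classes, not necessarily their actual names:

```java
import java.util.List;
import static org.junit.jupiter.api.Assertions.assertTrue;

class IdChangeSequenceSketch {
    // "EChange", "DeleteEObject", and "CreateEObject" are placeholders for the
    // corresponding Vitruv atomic change types; exact names and packages may differ.
    static void assertContainsDeleteAndCreate(List<? extends EChange> changes) {
        assertTrue(changes.stream().anyMatch(change -> change instanceof DeleteEObject),
                "changing an element's id must be derived as a delete of the old element");
        assertTrue(changes.stream().anyMatch(change -> change instanceof CreateEObject),
                "changing an element's id must be derived as a create of the new element");
    }
}
```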
I added some minor documentation in the `ChangeRecorder` regarding the `xmi:id` edge case and made the method interfaces of the `ResourceCopier` more expressive, plus documentation on why ids are copied in some cases and not in others. Should I document the `xmi:id` edge case at some other location too? Otherwise I would close Vitruv-Applications#184 after merging this PR.
This PR adds the following improvements:
- Copying of `xmi:id` tags (links to Vitruv-Applications#184). Ids are only copied within the scope of the view and not when creating a view from the V-SUM. This is done as the ids there don't have any semantics, as they are not detected by our change recorder.

Details on `xmi:id`:
As explained in Vitruv-Applications#184, the ids may not be stored in the `EObject`s but in the `Resource`. These ids are not copied when using the default `EcoreUtil` copy functionality. However, these ids are used by `EMFCompare` for identifier-based matching. This resulted in incorrect matching, as the created resource copies did not contain the same ids as the original resources. This occurred in the change-deriving view when creating the reference state resource set, as well as in the default state-based change derivation strategy when creating a resource copy to apply the derived differences to (to convert them to `EChange`s).
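To illustrate the general idea (this is only a sketch of the mechanism, not the actual `ResourceCopier` implementation): when copying a resource, the resource-stored ids have to be transferred explicitly, e.g. via `XMLResource.getID`/`setID`, because `EcoreUtil` copying only duplicates the `EObject`s:

```java
import java.util.Map;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.util.EcoreUtil;
import org.eclipse.emf.ecore.xmi.XMLResource;

class IdPreservingCopySketch {
    // Copies the contents of 'source' into 'target' and transfers resource-stored
    // xmi:ids, which plain EcoreUtil copying would otherwise lose.
    static void copyContentsWithIds(Resource source, Resource target) {
        EcoreUtil.Copier copier = new EcoreUtil.Copier();
        target.getContents().addAll(copier.copyAll(source.getContents()));
        copier.copyReferences();
        if (source instanceof XMLResource && target instanceof XMLResource) {
            XMLResource sourceXml = (XMLResource) source;
            XMLResource targetXml = (XMLResource) target;
            for (Map.Entry<EObject, EObject> entry : copier.entrySet()) {
                String id = sourceXml.getID(entry.getKey());
                if (id != null) {
                    // without this, identifier-based matching in EMF Compare cannot
                    // match the copied element with the original one
                    targetXml.setID(entry.getValue(), id);
                }
            }
        }
    }
}
```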