markusvoelter opened this issue 5 years ago
We can build an analysis framework on top of the incremental engine of shadow models. This will probably work better than integrating with the MPS typesystem, which isn't very good at incrementality.
Yeah, I thought about this as well, and that's certainly an obvious choice for checking rules. But replicating the complete type system ... I don't know.
I only mean the "non-typesystem rules" part of the typesystem.
A few useful places to look at for model checker integration:

- `new TypesystemChecker().getErrors(rootNode, repository)`
- `ValidationSettings.getCheckerRegistry().getCheckers()`
- `IChecker`
- `ModelCheckerIssueFinder`
- `CheckModel_Action`

The relevant checkers extend `AbstractNodeChecker`, which allows checking a single root node.
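For orientation, a minimal sketch of driving such a checker for a single root node. Only the `getErrors(rootNode, repository)` call is quoted from the list above; the import paths, the read-action wrapping, and the type of the returned issues are assumptions and may differ between MPS versions.

```java
// Sketch only: run the typesystem checker on one root node inside a read action.
// The import path of TypesystemChecker and the return type of getErrors(...)
// are assumptions; only the getErrors(rootNode, repository) call itself is
// taken from the notes above.
import jetbrains.mps.typesystemEngine.checker.TypesystemChecker; // path assumed
import org.jetbrains.mps.openapi.model.SNode;
import org.jetbrains.mps.openapi.module.SRepository;

public class RootNodeCheck {
    public static void printIssues(SNode rootNode, SRepository repository) {
        repository.getModelAccess().runReadAction(() -> {
            // assumed to return a collection of report items for the root node
            Object errors = new TypesystemChecker().getErrors(rootNode, repository);
            System.out.println(errors);
        });
    }
}
```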
Every transformation specifies declaratively how it propagates messages from its target node to its source node:
The types are regular Java types, such as `ClassifierType`.
The annotate declaration has a body that creates a new `HighLevelError` from a `LowLevelError`; this algorithm is regular Java code.
The framework keeps track of the nodes; if the `LowLevelError` is no longer present on the target node, the `HighLevelError` on the source node is automatically deleted. This way, the user only has to specify the (algorithmic) creation of the `HighLevelError`.
The lowest-level error (`DuplicateVariableName` in the example) is the result of an actual analysis (and not just "assembled" from lower-level errors). I assume that regular typing or checking rules are used.
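To make the described propagation mechanism concrete, here is a rough sketch in plain Java. Every name in it (`AnnotateRule`, `LowLevelError`, `HighLevelError`) is hypothetical; it only illustrates that a transformation declares, as regular Java code, how a low-level error on its target node is lifted into a high-level error on its source node, while the framework itself handles the tracking and automatic deletion.

```java
// Hypothetical sketch of the declarative message propagation described above.
// None of these types exist in MPS or in the shadow-models framework; they just
// illustrate that the user supplies only the LowLevelError -> HighLevelError
// mapping, while the framework tracks nodes and withdraws the HighLevelError
// once the underlying LowLevelError disappears.
import java.util.function.Function;

final class LowLevelError {
    final String message;
    LowLevelError(String message) { this.message = message; }
}

final class HighLevelError {
    final String message;
    HighLevelError(String message) { this.message = message; }
}

// The "annotate" declaration: its body is a regular Java function.
final class AnnotateRule {
    final Function<LowLevelError, HighLevelError> body;
    AnnotateRule(Function<LowLevelError, HighLevelError> body) { this.body = body; }
}

class PropagationExample {
    // e.g. lift a low-level DuplicateVariableName error to the source-node level
    static final AnnotateRule DUPLICATE_NAME = new AnnotateRule(low ->
        new HighLevelError("Duplicate name in the original model: " + low.message));
}
```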
Question: can we create error objects (rather than just strings) from checking and inference rules?
Or is it easier to create regular string errors and then match against the resulting error markers, instead of doing anything special with the rules themselves?
At the top level it would be useful to have an automatic annotator, so that users don't have to write a checking rule ("if has annotation ...") for every language concept. Of course this could be done with a single generic checking rule on BaseConcept, but I wonder what that does to performance.
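One possible shape for such a generic annotator, sketched with hypothetical stand-in types (nothing here is an actual MPS or shadow-models API): a single checker walks all nodes under a root and reports an error wherever a propagated error annotation is attached, so no per-concept checking rule is needed.

```java
// Hypothetical generic annotator: one recursive pass over a root's nodes that
// reports an error for every propagated annotation it finds. Node,
// ErrorAnnotation and IssueReporter are stand-in interfaces, not MPS APIs.
import java.util.List;

interface ErrorAnnotation { String message(); }

interface Node {
    List<Node> children();
    List<ErrorAnnotation> errorAnnotations(); // annotations attached by the framework
}

interface IssueReporter { void reportError(Node node, String message); }

final class GenericAnnotationChecker {
    // Plays the role of a single BaseConcept-level "if has annotation ..." rule.
    void check(Node node, IssueReporter reporter) {
        for (ErrorAnnotation a : node.errorAnnotations()) {
            reporter.reportError(node, a.message());
        }
        for (Node child : node.children()) {
            check(child, reporter);
        }
    }
}
```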