The lifting of error messages should be able to work with objects other than strings, so that it can carry "semantic analysis results" to the degree they are available.
I suggest the following changes:
The lift method should allow passing not just strings but arbitrary objects. If such an object arrives at the top and thus has to be annotated to the source model in the editor, the framework simply calls toString on that object.
In all other cases (when it is not the top-level error), the framework propagates the object up along the trace. To allow the user to match on the object, the signature of the lift method should be changed from taking a string and a lifter to taking an object, a string and a lifter (see the sketch after this list):
- the object is the bubbling-up object I describe above
- the text is the toString of that object
- the lifter is unchanged
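A minimal Java sketch of the proposed shape, assuming names of my own choosing; Lifter, liftStep and annotateAtTop are illustrative and not the actual Shadowmodels API:

```java
// Hedged sketch only: names are illustrative, not the real framework API.
interface Lifter {
    // The user can match on the payload object (e.g. a structured
    // counterexample from model checking) or fall back to the plain text.
    Object lift(Object payload, String text);
}

final class ErrorLifting {
    // One step of propagating an error object up along the trace.
    static Object liftStep(Object payload, Lifter lifter) {
        // The text is simply the toString of the payload; for
        // checking-rule-created strings, object and text are the same.
        String text = String.valueOf(payload);
        return lifter.lift(payload, text);
    }

    // When the object reaches the top, it is annotated to the source model
    // in the editor; the framework just calls toString on it.
    static String annotateAtTop(Object payload) {
        return String.valueOf(payload);
    }

    public static void main(String[] args) {
        // Example: a lifter that wraps the incoming text at its own level.
        Lifter componentLifter = (payload, text) -> "in component X: " + text;
        Object lifted = liftStep("division by zero", componentLifter);
        System.out.println(annotateAtTop(lifted));
    }
}
```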
This way, we can still deal with checking-rule-created strings; in this case, the object and the text are the same. But if we have a way of injecting objects from specific analyses (e.g., structured counterexamples from model checking), those can be translated up as well.
Once this is implemented, we should think about a Shadowmodel-specific API to "inject" objects into the chain through means other than checking rules. Or allow checking rules to also transport an object, e.g. by putting it into a particular slot in the user objects; a rough sketch of that idea follows below.
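The following is only a sketch of the slot idea under assumed names; the map-based user object and the slot name are made up for illustration and are not an existing framework API:

```java
// Hedged sketch: a checking rule stores a payload object in an agreed-upon
// slot of the user object it reports on, so the framework can bubble the
// object (not just the message string) up along the trace.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class CheckingRuleSlotExample {
    // Hypothetical slot name under which a checking rule may store a payload.
    static final String PAYLOAD_SLOT = "lifting.payload";

    // A checking rule reports a message and, optionally, a structured payload.
    static void report(Map<String, Object> userObject, String message, Object payload) {
        userObject.put("message", message);
        userObject.put(PAYLOAD_SLOT, payload); // picked up by the lifting chain
    }

    public static void main(String[] args) {
        Map<String, Object> userObject = new HashMap<>();
        // e.g. a structured counterexample instead of a flat string
        report(userObject, "safety property violated",
               List.of("state s0", "step a", "state s1"));
        System.out.println(userObject.get(PAYLOAD_SLOT));
    }
}
```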