Collection of scenarios we'd like to support with delta protocol
My editor should update automatically with concurrent edits from other clients.
The repository (and all clients) use an operational transformation (OT) approach to converge edits.
Alternatively, the repository (and all clients) use conflict-free replicated data types (CRDTs) to converge edits.
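A minimal sketch of the client side of such a delta stream, assuming the protocol delivers already-reconciled deltas to every client; all names (`Delta`, `DeltaStream`, `onDelta`, ...) are assumptions for illustration, not an existing API:

```typescript
// Hypothetical delta protocol types (assumed names, not an existing API).
type NodeId = string;

type Delta =
  | { kind: "propertyChanged"; node: NodeId; property: string; newValue: string }
  | { kind: "childAdded"; parent: NodeId; child: NodeId; index: number }
  | { kind: "nodeDeleted"; node: NodeId };

interface DeltaStream {
  // The transport (websocket, polling, ...) is left open here.
  onDelta(handler: (delta: Delta, originClientId: string) => void): void;
}

class EditorClient {
  constructor(private readonly clientId: string, stream: DeltaStream) {
    stream.onDelta((delta, origin) => {
      // Skip echoes of our own edits; apply every delta coming from other clients.
      if (origin !== this.clientId) {
        this.applyRemote(delta);
      }
    });
  }

  private applyRemote(delta: Delta): void {
    // Whether convergence is done via OT or CRDTs, the editor only ever sees a
    // stream of already-reconciled deltas and updates its view from them.
    console.log(`[${this.clientId}] applying`, delta);
  }
}
```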
NOTE: Validation findings stem from the derived validation model.
My editor has a list view with all validation findings of the whole repository. It should always be up-to-date.
My editor shows a subset of the model, and marks findings with squiggly lines under the corresponding text. These findings should always be up-to-date. However, my editor should only receive validation findings for the shown model subset.
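One way the two validation scenarios above could be served is a scoped subscription, sketched below under assumed names (`ValidationService`, `subscribe`, `FindingEvent`): the list view subscribes repository-wide, the projectional editor only for the subset it currently shows.

```typescript
type NodeId = string;

interface Finding {
  node: NodeId;
  message: string;
  severity: "error" | "warning" | "info";
}

interface FindingEvent {
  added: Finding[];
  removed: Finding[];
}

interface ValidationSubscription {
  close(): void;
}

interface ValidationService {
  // scope === undefined means "the whole repository".
  subscribe(
    handler: (event: FindingEvent) => void,
    scope?: { roots: NodeId[] },
  ): ValidationSubscription;
}

function showListView(service: ValidationService): ValidationSubscription {
  // Repository-wide list of findings, kept up to date via the events.
  return service.subscribe((event) => {
    event.added.forEach((f) => console.log(`+ ${f.severity}: ${f.message}`));
    event.removed.forEach((f) => console.log(`- ${f.severity}: ${f.message}`));
  });
}

function showSquiggles(service: ValidationService, shownRoots: NodeId[]): ValidationSubscription {
  // Only findings under the currently shown subtrees are delivered, so the
  // editor is not flooded with findings for hidden parts of the model.
  return service.subscribe((event) => {
    event.added.forEach((f) => console.log(`squiggle under ${f.node}: ${f.message}`));
    event.removed.forEach((f) => console.log(`clear squiggle under ${f.node}`));
  }, { roots: shownRoots });
}
```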
My processor wants to know about changes in the whole repository. Depending on the change, it might create new validation findings in a derived model, or remove existing findings from that derived model.
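A hedged sketch of such a validator processor, with assumed names (`Change`, `DerivedFindingModel`): it reacts to every repository change and adds or removes findings in the derived model accordingly.

```typescript
type NodeId = string;

type Change =
  | { kind: "propertyChanged"; node: NodeId; property: string; newValue: string }
  | { kind: "nodeDeleted"; node: NodeId };

interface DerivedFindingModel {
  addFinding(node: NodeId, message: string): void;
  removeFindings(node: NodeId): void;
}

class ValidatorProcessor {
  constructor(private readonly findings: DerivedFindingModel) {}

  // Called for every change in the whole repository.
  onChange(change: Change): void {
    switch (change.kind) {
      case "propertyChanged":
        // Re-validate the affected node; here a toy rule: names must not be empty.
        this.findings.removeFindings(change.node);
        if (change.property === "name" && change.newValue.trim() === "") {
          this.findings.addFinding(change.node, "name must not be empty");
        }
        break;
      case "nodeDeleted":
        // Findings attached to deleted nodes are removed from the derived model.
        this.findings.removeFindings(change.node);
        break;
    }
  }
}
```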
My editor shows a tree editor with the structure of the original model. It should always be up-to-date, i.e. show all existing model contents with their current labels, and not show deleted contents.
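For illustration, a small sketch (assumed names) of how such a tree editor could apply label changes and deletions from the delta stream to its view:

```typescript
type NodeId = string;

interface TreeNode {
  id: NodeId;
  label: string;
  children: TreeNode[];
}

class TreeEditorView {
  private readonly nodes = new Map<NodeId, TreeNode>();

  register(node: TreeNode): void {
    this.nodes.set(node.id, node);
    node.children.forEach((c) => this.register(c));
  }

  // Label changes show up immediately in the tree.
  onLabelChanged(id: NodeId, newLabel: string): void {
    const node = this.nodes.get(id);
    if (node) node.label = newLabel;
  }

  // Deleted contents disappear from the view, including their subtree.
  onNodeDeleted(id: NodeId, parentId: NodeId): void {
    const parent = this.nodes.get(parentId);
    if (parent) parent.children = parent.children.filter((c) => c.id !== id);
    this.nodes.delete(id);
  }
}
```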
My processor listens to changes in the original model. If it detects a bad modelling style (e.g. using a GenericReferenceConcept in places where a MethodReferenceConcept would work), it automatically changes the original model.
Note that my processor listens to the same models it might change.
Ideally, my processor would also know about all changes that happened while my processor was unavailable (not started yet, crashed, paused, ...).
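Below is a hypothetical sketch of this style-fixing processor, assuming the repository exposes a sequenced change log (`changesSince`) so the processor can catch up after downtime; none of these names are part of an existing API.

```typescript
type NodeId = string;

interface Change {
  sequenceNumber: number;
  node: NodeId;
  concept: string; // e.g. "GenericReferenceConcept"
}

interface Repository {
  // Replay all changes after the given sequence number (catch-up after a crash).
  changesSince(sequenceNumber: number): Change[];
  replaceConcept(node: NodeId, newConcept: string): void;
}

class StyleFixerProcessor {
  private lastSeen = 0; // would be persisted so a restart can resume from here

  constructor(private readonly repo: Repository) {}

  start(): void {
    // First catch up on everything missed while the processor was unavailable...
    this.repo.changesSince(this.lastSeen).forEach((c) => this.onChange(c));
    // ...then keep processing live changes (wiring of the live stream omitted).
  }

  onChange(change: Change): void {
    this.lastSeen = Math.max(this.lastSeen, change.sequenceNumber);
    // A fix issued here produces a new change on the same model; the fixed node
    // no longer matches the bad pattern, so no fix loop occurs.
    if (change.concept === "GenericReferenceConcept" && this.couldBeMethodReference(change.node)) {
      this.repo.replaceConcept(change.node, "MethodReferenceConcept");
    }
  }

  private couldBeMethodReference(_node: NodeId): boolean {
    return true; // placeholder for the actual style check
  }
}
```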
I want to write model queries (e.g. REPL-style or Jupyter notebook-style) that point to other models, and execute my queries against those models. I want to do this for impact analyses/analytics/etc.
Whenever I update a query or the models pointed to change, the query result gets updated as well. I should be able to refer to other queries from a query so I can compose them, and updates should cascade. The query editor should be aware of the languages of the models pointed to.
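A sketch, under assumed names, of how composed queries with cascading updates could be organized: each query declares the models and other queries it depends on, and any change re-evaluates everything downstream (dependencies are assumed acyclic).

```typescript
type QueryId = string;

interface Query {
  id: QueryId;
  dependsOnQueries: QueryId[];
  dependsOnModels: string[];
  evaluate(resultsOfDependencies: Map<QueryId, unknown>): unknown;
}

class QueryNotebook {
  private readonly queries = new Map<QueryId, Query>();
  private readonly results = new Map<QueryId, unknown>();

  define(query: Query): void {
    this.queries.set(query.id, query);
    this.reevaluate(query.id);
  }

  // Called when a pointed-to model changes, e.g. driven by the delta protocol.
  onModelChanged(modelId: string): void {
    for (const q of this.queries.values()) {
      if (q.dependsOnModels.includes(modelId)) this.reevaluate(q.id);
    }
  }

  private reevaluate(id: QueryId): void {
    const query = this.queries.get(id);
    if (!query) return;
    const deps = new Map<QueryId, unknown>();
    query.dependsOnQueries.forEach((d) => deps.set(d, this.results.get(d)));
    this.results.set(id, query.evaluate(deps));
    // Cascade to every query that composes this one.
    for (const q of this.queries.values()) {
      if (q.dependsOnQueries.includes(id)) this.reevaluate(q.id);
    }
  }
}
```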
I have a language for describing business rules (or anything else with concrete semantics, i.e. an evaluator), and also a language for describing tests for business rules. I want to run those tests all the time using a processor. Whenever either the business rules or their tests change, I want the processor to re-run the tests and get notified when the test results change. Ideally, these test results are shown in the projection of the tests and even the business rules themselves.
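A minimal sketch (assumed names) of such a test-running processor: it re-runs all tests on any change to the rules or the tests and notifies only when a result flips.

```typescript
type TestId = string;

interface TestCase {
  id: TestId;
  run(): boolean; // true = passed; evaluation of the business rule is hidden behind run()
}

class TestRunnerProcessor {
  private readonly lastResults = new Map<TestId, boolean>();

  constructor(
    private readonly tests: () => TestCase[],
    private readonly notify: (id: TestId, passed: boolean) => void,
  ) {}

  // Called on any change to the business rules or to the tests themselves.
  onRulesOrTestsChanged(): void {
    for (const test of this.tests()) {
      const passed = test.run();
      if (this.lastResults.get(test.id) !== passed) {
        // Only changed results are pushed, e.g. into the projection of the
        // tests (and possibly the business rules) so they show up inline.
        this.notify(test.id, passed);
        this.lastResults.set(test.id, passed);
      }
    }
  }
}
```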
I have a language such that models in it can be evaluated/executed against some input, e.g. defined in a test. (See also previous scenario.) The evaluation/execution not only produces an end result, but it also produces a trace of the evaluation/execution so that I can debug it. I want the trace to show in conjunction with the evaluation/execution result, and I want it to be updated live.
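One possible shape for such an evaluation result, shown with a toy evaluator: the trace grows step by step and a callback lets the editor render it live next to the end result; all names are illustrative assumptions.

```typescript
interface TraceEntry {
  step: string;
  value: unknown;
}

interface Evaluation<T> {
  result: T;
  trace: TraceEntry[];
}

// Toy evaluator: sums the inputs and records one trace entry per step.
function evaluateSum(inputs: number[], onStep?: (entry: TraceEntry) => void): Evaluation<number> {
  const trace: TraceEntry[] = [];
  let acc = 0;
  inputs.forEach((n, i) => {
    acc += n;
    const entry = { step: `add input #${i} (${n})`, value: acc };
    trace.push(entry);
    onStep?.(entry); // live-update hook: the editor can render the trace as it grows
  });
  return { result: acc, trace };
}
```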
Interpret an executable model and show the results in a graphical way, e.g. as a graph.
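A small sketch of one way to hand the interpretation result to a graphical viewer, here by emitting Graphviz DOT text; the `GraphResult` shape is an assumption.

```typescript
interface GraphResult {
  nodes: { id: string; label: string }[];
  edges: { from: string; to: string }[];
}

// Render the interpretation result as DOT, which any Graphviz-based viewer can draw.
function toDot(result: GraphResult): string {
  const lines = ["digraph result {"];
  result.nodes.forEach((n) => lines.push(`  "${n.id}" [label="${n.label}"];`));
  result.edges.forEach((e) => lines.push(`  "${e.from}" -> "${e.to}";`));
  lines.push("}");
  return lines.join("\n");
}
```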