Apalache currently implements a form of profiling that lets users visualize the parts of a specification that produce a large number of SMT constraints (and, typically, longer runtimes).
While helpful, it cannot be used in a quick-iteration cycle, because it is tied to actually running the model checker.
We should implement a static analysis that estimates costs up front: since we know, for each encoding, how many constraints each rewriting rule introduces, we can implement a cost estimator for each TLA+ operator, and a pass that aggregates these costs without actually performing model checking. These aggregates can then be rendered as a heatmap, like the ones obtained from the `--smtprof` flag and `heatmap.py`.
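A minimal sketch of the aggregation pass, in Python for brevity: costs are assigned per operator and summed bottom-up over the expression tree. The operator names and cost table here are purely illustrative placeholders, not Apalache's actual rewriting-rule costs or data structures.

```python
# Hypothetical sketch of static cost aggregation over a TLA+ expression tree.
# Operator names and per-rule costs below are illustrative assumptions, not
# Apalache's real rewriting-rule constraint counts.
from dataclasses import dataclass, field


@dataclass
class Expr:
    oper: str
    args: list = field(default_factory=list)


# Assumed constraint counts introduced by each rewriting rule (illustrative).
COST = {"and": 1, "or": 1, "eq": 1, "in": 5, "subseteq": 10, "exists": 8}


def estimate(e: Expr) -> int:
    """Estimate SMT constraints for an expression without running the checker."""
    return COST.get(e.oper, 1) + sum(estimate(a) for a in e.args)


# Example: an expression shaped like \E x \in S: x \subseteq T
spec = Expr("exists", [Expr("in"), Expr("subseteq")])
print(estimate(spec))  # 8 + 5 + 10 = 23
```

The per-node totals computed this way could feed the same heatmap rendering that the runtime profiler uses today.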
This could be run either as a standalone mode (`$APA profile Spec.tla`), or simply as additional output in the standard model-checking mode.