When using `act-tester-analyse`, we're performing increasingly heavyweight analysis on the plan file, including reading in external files. It may be the case that, when tracking down errors, we invoke `act-tester-analyse` in several different ways in quick succession.
From both this and a 'tools should do one thing and do it well' standpoint, I wonder if it makes sense to separate analysis into its own plan stage, serialise the analysis into JSON, and have all of the various query tools instead depend on that stage being present. This would then, later, let us split `act-tester-analyse` into different tools, e.g. for human-readable output, CSV output, and so on.
The only disadvantage I can think of, besides taking up infrastructure time that is probably not on the critical path at the moment, is further bloating the already quite bloated plan files on disk. (It won't change the amount of plan data that goes between the director and machine node, thankfully.)