liquidev opened this issue 2 months ago
I've been thinking over which costs are the most useful from the user perspective, and here's an overview.
We consider the case where the brush grows organically rather than being pasted in all at once. Therefore we're not concerned with the case where earlier phases fail and prevent later phases from executing.
I think the following metrics should be displayed to the user:
The following metrics may be useful, but we won't show them, because it's hard to communicate what they're about, and they're hard to implement efficiently. Your script will simply fail if any of these are exhausted - which will most likely happen due to a bug.
The following metrics will not be displayed until exhausted, because they mostly describe the same thing: source code size. I presume out of all code size-related metrics, AST nodes will reach their limit first.
65536 times, I believe the parser would emit an error node for every one of those, which would mean parser events get exhausted first.

It doesn't feel very useful to monitor the following metrics proactively:
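For the hard-fail metrics above, the underlying mechanism could be sketched like this - a counter per limited resource that returns an error once the limit is exhausted. (All names here are hypothetical illustrations, not the VM's actual API.)

```rust
/// Error returned when a limited resource runs out (hypothetical sketch).
#[derive(Debug, PartialEq)]
pub struct ResourceExhausted(pub &'static str);

/// A counter for one limited resource, e.g. parser events or AST nodes.
pub struct Limited {
    name: &'static str,
    used: usize,
    limit: usize,
}

impl Limited {
    pub fn new(name: &'static str, limit: usize) -> Self {
        Self { name, used: 0, limit }
    }

    /// Consume `n` units, failing hard if that would exceed the limit.
    pub fn consume(&mut self, n: usize) -> Result<(), ResourceExhausted> {
        if self.used + n > self.limit {
            Err(ResourceExhausted(self.name))
        } else {
            self.used += n;
            Ok(())
        }
    }
}
```

Metrics we don't surface to the user would just propagate this error as a plain failure, while the displayed ones would additionally feed their `used`/`limit` pair to the UI.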
The following metrics are related to the current renderer, which I plan to rework into something much simpler - such that these probably aren't going to be needed.
I'm still debating bulk memory. It'd be nice to merge them into refs somehow, but I don't see how we could meaningfully do that...
Once we have a live brush preview #36, it would be nice to display some gauges to the user to visualize the cost of their brush and how close it is to the VM's limits. Right now, if a brush runs out of fuel, recurses too deep, or exhausts any other resource, it fails abruptly with an error message. Monitoring these limited resources actively with gauges would be a much nicer experience.
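Such a gauge could be driven by a simple used-over-limit fraction; a minimal sketch, where the function name and the clamping behavior are assumptions rather than existing code:

```rust
/// Fraction of a limit that has been consumed, clamped to [0, 1],
/// suitable for driving a UI gauge. A zero limit reads as fully used.
/// (Sketch only - actual gauge rendering would live in the frontend.)
pub fn gauge_fill(used: usize, limit: usize) -> f32 {
    if limit == 0 {
        1.0
    } else {
        (used as f32 / limit as f32).min(1.0)
    }
}
```

The clamp matters because a brush that blew past a limit in the previous frame should show a full gauge, not overflow the widget.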