Most of the time, convolution will be fast and give very nice results. But there are situations where sampling makes more sense. In particular, if the whole expression consists of a bunch of algebraic combinations, then it will be much faster to just estimate the overall minX/maxX, compile the whole tree into a JS function, and take a few thousand samples. And then there are cases where it makes sense to sample a particular subtree (e.g. for pointwise products) but to do the rest of the work on XYShapes directly. So for any given treeNode, we need to:
1. Have some kind of heuristic for deciding which strategy to use to evaluate the subtree.
2. If we decide to evaluate via sampling, provide a way to do this:
   - Easiest: compile the subtree into a Guesstimator string and let Guesstimator do the work. This probably won't work, though, because we now have functionality that Guesstimator (probably?) doesn't, like pointwise scaling.
   - Better: compile the subtree into a JS function and pass that to Guesstimator's sampler directly.
   - Even better: compile the subtree into a JS function and use a more sophisticated sampler; it might be relatively easy to lift code from WebPPL. (A rough sketch of the heuristic and the compile-and-sample step follows this list.)
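As a very rough sketch of how the heuristic and the compile-and-sample step could fit together (the `TreeNode` shape and the `shouldSample` / `compileToSampler` / `evaluateBySampling` names are hypothetical illustrations, not existing APIs here or in Guesstimator):

```typescript
// Hypothetical node shape for illustration; the real tree types will differ.
type TreeNode =
  | { type: "distribution"; sample: () => number } // leaf with a known sampler
  | { type: "algebraic"; op: "+" | "-" | "*" | "/"; left: TreeNode; right: TreeNode }
  | { type: "pointwise"; op: "*"; left: TreeNode; right: TreeNode };

// Heuristic: if the subtree is nothing but algebraic combinations of leaf
// distributions, sampling is cheap and accurate enough; anything else
// (e.g. pointwise ops) falls back to convolution / direct XYShape work.
function shouldSample(node: TreeNode): boolean {
  switch (node.type) {
    case "distribution":
      return true;
    case "algebraic":
      return shouldSample(node.left) && shouldSample(node.right);
    default:
      return false;
  }
}

// "Compile" a subtree into a plain JS sampling function. Here that is just a
// recursive closure; a real compiler could instead emit source text for
// Guesstimator's sampler, or hand the closure to a fancier sampler.
function compileToSampler(node: TreeNode): () => number {
  switch (node.type) {
    case "distribution":
      return node.sample;
    case "algebraic": {
      const left = compileToSampler(node.left);
      const right = compileToSampler(node.right);
      const ops: Record<string, (a: number, b: number) => number> = {
        "+": (a, b) => a + b,
        "-": (a, b) => a - b,
        "*": (a, b) => a * b,
        "/": (a, b) => a / b,
      };
      const apply = ops[node.op];
      return () => apply(left(), right());
    }
    case "pointwise":
      throw new Error("pointwise subtrees should be handled on XYShapes, not sampled");
  }
}

// Take a few thousand samples from a subtree we have decided to sample.
function evaluateBySampling(node: TreeNode, n = 3000): number[] {
  const sampler = compileToSampler(node);
  return Array.from({ length: n }, sampler);
}

// Usage: sample uniform(0,1) * uniform(0,1).
const leaf = (): TreeNode => ({ type: "distribution", sample: Math.random });
const tree: TreeNode = { type: "algebraic", op: "*", left: leaf(), right: leaf() };
const samples = shouldSample(tree) ? evaluateBySampling(tree) : [];
```

Whether the closure approach or compiling to a source string is better probably depends on how much of the sampling loop Guesstimator's sampler wants to own.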