At the moment, the implementation compiles a Hap AST to actions in the `HapT` monad, atop either `IO` (batch) or `InputT IO` (interactive). This is reasonably efficient for an interpreter, but still likely won’t scale. I don’t want to focus too much on optimisation just yet, but I do want to make sure I’m not painting myself into a corner when it comes to performance. As I see it, there are a few areas with lots of low-hanging fruit:
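To make the setup concrete, here is a minimal sketch of the *shape* such a transformer stack might take — the names and state fields are illustrative assumptions, not the actual `HapT` definition. The point is that keeping the base monad polymorphic is what lets the same compiled actions run atop either `IO` or `InputT IO`:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
-- Hypothetical sketch only: the real HapT presumably carries richer
-- interpreter state (environment, event queue, cells) than this.
import Control.Monad.IO.Class (MonadIO)
import Control.Monad.Trans.State (StateT, evalStateT)

-- Placeholder interpreter state; the field here is illustrative.
data HapState = HapState
  { hapEnv :: [(String, Int)]
  }

-- A state transformer over an arbitrary base monad m, so the same
-- compiled actions can run over IO (batch) or InputT IO (interactive).
newtype HapT m a = HapT (StateT HapState m a)
  deriving (Functor, Applicative, Monad, MonadIO)

runHapT :: Monad m => HapT m a -> m a
runHapT (HapT action) = evalStateT action (HapState [])
```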
Runtime Structure
At the moment, the compiler builds `IO` actions at runtime. It could instead compile to some IR and interpret that. This might be slower initially, because GHC invests heavily in optimising `IO`, but it would open the door to JIT/AOT/bytecode compilation. Such an IR would be relatively easy to emit for other managed runtimes like .NET or Java, if that seems desirable.
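As a sketch of what "compile to some IR and interpret that" could look like, here is a toy stack-machine IR and tree-walking interpreter. The names (`Instr`, `run`) and the instruction set are invented for illustration; the real IR would need to cover Hap's full semantics. The key property is that the IR is plain data, so the same structure could later be handed to a bytecode assembler or JIT instead of being walked directly:

```haskell
-- Hypothetical miniature IR: just literals and two arithmetic ops.
data Instr
  = Push Int  -- push a literal onto the stack
  | Add       -- pop two values, push their sum
  | Mul       -- pop two values, push their product
  deriving (Eq, Show)

-- Interpret an instruction sequence against an operand stack.
-- Returns Nothing on stack underflow or a malformed program.
run :: [Instr] -> [Int] -> Maybe Int
run []            [x]          = Just x
run (Push n : is) st           = run is (n : st)
run (Add    : is) (a : b : st) = run is (a + b : st)
run (Mul    : is) (a : b : st) = run is (a * b : st)
run _             _            = Nothing
```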
Data Representation
Take advantage of static typing where possible to remove type tags and unbox values. Having a fixed set of primitive types (#2) should offer a lot of opportunities for improving representations.
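The payoff of tag removal can be shown in miniature. In the sketch below (the `Value` type and function names are assumptions, not Hap's actual representation), a dynamically tagged addition must scrutinise both operands and handle failure, whereas once type checking proves both operands are numbers, the tag and the failure case vanish and GHC can unbox the `Double`s entirely:

```haskell
-- Hypothetical tagged value representation for illustration.
data Value
  = VNum  !Double
  | VBool !Bool
  | VText !String
  deriving (Eq, Show)

-- With tags, every addition pattern-matches and may fail:
addDyn :: Value -> Value -> Maybe Value
addDyn (VNum a) (VNum b) = Just (VNum (a + b))
addDyn _        _        = Nothing

-- With static knowledge that both operands are numbers, the tag,
-- the Maybe, and (after unboxing) the heap allocation all disappear:
addNum :: Double -> Double -> Double
addNum a b = a + b
```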
Reactivity Analysis
Most variables don’t participate in events, so they don’t need to be allocated to cells. I think it would be valuable *both* to give the programmer explicit control over reactivity *and* to analyse programs to determine when a binding doesn’t need to be reactive, and can thus be omitted from dependency tracking. Programmer control could take the form of separate binding forms for reactive and nonreactive variables (`var` vs. `let`? #4), or a way to annotate a binding as reactive or not. A good, simple first pass for reactivity analysis would be to remove reactivity from any variable that isn’t referenced from a listener condition. However, in interactive mode, the interpreter can’t know ahead of time whether the programmer will add an event depending on a binding, so either *everything* needs to be reactive, or there needs to be a way to promote nonreactive bindings to reactive.
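That first-pass analysis is easy to sketch. Over a toy AST (the types `Expr`, `Stmt`, and the function names below are invented for illustration, not Hap's actual syntax tree), it just collects the variables free in listener *conditions*; any binding outside that set can be a plain slot rather than a reactive cell:

```haskell
import           Data.Set (Set)
import qualified Data.Set as Set

-- Hypothetical miniature AST, just enough to express the analysis.
data Expr = Var String | Lit Int | BinOp Expr Expr

data Stmt
  = Bind String Expr     -- a variable binding
  | WhenEver Expr [Stmt] -- a listener: condition plus body

-- Variables occurring free in an expression.
freeVars :: Expr -> Set String
freeVars (Var x)     = Set.singleton x
freeVars (Lit _)     = Set.empty
freeVars (BinOp a b) = freeVars a `Set.union` freeVars b

-- First pass: a binding needs a reactive cell only if it appears in
-- some listener condition (including listeners nested in bodies).
reactiveVars :: [Stmt] -> Set String
reactiveVars = foldMap go
  where
    go (Bind _ _)           = Set.empty
    go (WhenEver cond body) = freeVars cond `Set.union` reactiveVars body
```

In batch mode this is sound because the whole program is visible; the interactive-mode caveat above is exactly why the result can only be treated as provisional there.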