natefaubion opened 1 year ago
This isn't so simple. For example, pattern matching lifts all pattern match bodies into bound lambdas, so something like:
```
example = case _ of
  Left _ ->
    foo ...
  Right _ ->
    bar ...
```
is desugared as something like:
```
example x =
  let
    k0() = foo ...
    k1() = bar ...
    k2() = fail()
  in
    if x is Left then
      k0()
    else if x is Right then
      k1()
    else
      k2()
```
Where inlining may then take over. The issue is that at the point of evaluating the bodies (which are in a binding), there is no refinement information from the predicates. After such bodies are inlined, there may be an opportunity to propagate refinement information, but this would need an additional rewrite queued. So I think this will need to happen as part of `build` rather than `eval`, with additional analysis tracking more detailed usages of such predicates.
Alternatively, we could use some sort of dynamic context that follows control flow, rather than a lexical environment.
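As a loose sketch of that distinction (all names here are invented for illustration, not the optimizer's actual types): a lexically-captured body is closed over whatever facts existed when the binding was made, while a dynamically-passed context delivers the facts in force at the point the body is entered.

```python
# All names here are invented for illustration.
Facts = dict  # var -> known constructor tag, e.g. {"x": "Baz"}

def lexical_body(facts_at_bind: Facts):
    """A thunk closed over the facts available when the binding was made."""
    return lambda: facts_at_bind.get("x")

def dynamic_body(facts_here: Facts):
    """A body that receives the facts in force where it is invoked."""
    return facts_here.get("x")

# The let-bound body is built before any guard has run, so forcing the
# thunk later -- even "inside" a branch that knows x is Baz -- learns nothing.
thunk = lexical_body({})
```

Forcing `thunk` yields `None` regardless of where it is forced, while `dynamic_body({"x": "Baz"})` sees the refinement, which is the property the let-lifted match bodies above currently lack.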
Given code where `foo` is inline-always, one would hope that the branches would fuse together into an optimized result. But this doesn't currently happen, since `expr` is opaque. To make this work we would need to propagate refinement information in each branch on the opaque term, saying that in the `Baz` branch, any subsequent `OpIsTag` operation on `expr` can be statically compared against `Baz`.
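For concreteness, here is a toy model of that propagation (the mini-IR and `simplify` are hypothetical stand-ins, not the optimizer's real types): facts learned from a guard flow into its yes-branch, so an inner tag test on the same opaque term is decided statically and the branches fuse.

```python
from dataclasses import dataclass

# Hypothetical mini-IR, just enough to show the fusion.
@dataclass(frozen=True)
class Lit:
    value: int

@dataclass(frozen=True)
class IsTag:
    var: str        # the scrutinized opaque term, e.g. "expr"
    tag: str        # constructor tag tested for, e.g. "Baz"
    yes: object     # branch taken when the tag matches
    no: object      # branch taken otherwise

def simplify(refs: dict, expr):
    """Propagate guard refinements (var -> known tag) into each branch.

    An inner IsTag whose subject is already refined is decided
    statically, fusing the branches together.
    """
    if isinstance(expr, Lit):
        return expr
    known = refs.get(expr.var)
    if known is not None:
        # Test decided statically against the known tag.
        taken = expr.yes if known == expr.tag else expr.no
        return simplify(refs, taken)
    # Unknown subject: record the refinement along the yes-path only.
    return IsTag(expr.var, expr.tag,
                 simplify({**refs, expr.var: expr.tag}, expr.yes),
                 simplify(refs, expr.no))
```

With this, a nested test like `IsTag("expr", "Baz", IsTag("expr", "Baz", yes, other), no)` collapses the inner test, which is exactly the fusion one would hope for above.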
One way to do this would be to change `SemConditional`. Currently, each branch is completely closed with respect to evaluation, and so can't admit any new refinement information. We could change that so the branch is a function instead, taking some refinement as an argument.
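A rough sketch of the two shapes, using hypothetical Python stand-ins for the semantic types (the real `SemConditional` branches are also lazily evaluated, which is elided here):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Guard:
    var: str
    tag: str          # e.g. "expr is Baz"

@dataclass(frozen=True)
class Refinement:
    known_tag: str    # what the guard tells us about the scrutinee

# Roughly the current shape: each branch is a finished semantic value,
# closed with respect to evaluation -- it can't admit new facts.
@dataclass(frozen=True)
class CondClosed:
    branches: list    # [(Guard, sem_value)]
    fallback: object

# The proposed shape: each branch is a function that receives the
# refinement implied by its guard, so evaluation inside the branch
# can exploit it.
@dataclass(frozen=True)
class CondOpen:
    branches: list    # [(Guard, fn(Refinement) -> sem_value)]
    fallback: object
```

The evaluator would then supply, say, `Refinement("Baz")` when entering the `Baz` branch, and the branch body could decide any `OpIsTag` on the scrutinee against it.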
I'm not sure if this information should just be tracked in a `Map` in the `Env`, or if the `locals` could potentially be updated in such a way that `deref`ing the binding in that branch can yield a term that fits the refinement.
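A minimal sketch of the `Map`-in-`Env` option (again with invented names, not the real `Env`): `OpIsTag` consults the refinement map first, and only residualizes when nothing is known about the term.

```python
from dataclasses import dataclass, field

# Hypothetical Env shape: a refinement map carried alongside the locals.
@dataclass
class Env:
    refinements: dict = field(default_factory=dict)  # var -> known tag

def op_is_tag(env: Env, var: str, tag: str):
    """Evaluate OpIsTag against the refinement map.

    Returns True/False when a refinement decides the test statically,
    or None when it must residualize as an opaque test, as today.
    """
    known = env.refinements.get(var)
    return None if known is None else known == tag
```

The `locals` alternative would instead rewrite the binding itself so that `deref` in that branch yields a term already specialized to the known tag; the `Map` keeps refinements separate from the bindings, which may be simpler to thread through.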