This PR introduces a new structure for `ExecutionGraph`.
Originally the `ExecutionGraph` contained thunk nodes which were keyed by their hash, and dependencies were also expressed as hashes. This meant that when a reduction was applied, every ancestral node's hash, and therefore its key, would change, requiring an update to every reference to it. An example:
```
  A
 / \
B   C
 \ /
  D
```
Say that `D` becomes `D'`, with a new hash. Then `B` and `C` must also be updated, as must `A`.
This sort of pervasive global response to every local modification is, of course, unreasonable, and not necessary to solve the problem.
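To make the cascade concrete, here is a minimal sketch of a hash-keyed layout in the Merkle style. The names (`Thunk`, `hash_of`, etc.) are illustrative stand-ins, not the actual types from this codebase:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

struct Thunk {
  std::vector<std::string> dep_hashes;  // dependencies named by hash
  std::string body;                     // stand-in for the real payload
};

// A node's key covers its dependencies' hashes, so replacing D with D'
// re-keys B and C, which in turn re-keys A.
std::string hash_of(const Thunk& t) {
  std::string input = t.body;
  for (const auto& h : t.dep_hashes) input += h;
  return std::to_string(std::hash<std::string>{}(input));
}

std::map<std::string, Thunk> graph;  // keyed by hash_of(node)
```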
To avoid global updates, the old implementation had a lazy update strategy which assumed bottom-up execution of the graph.
@sadjad and I have been planning to add support for non-bottom-up execution strategies, e.g. to support such behaviors as "short-circuiting" thunks, which return whenever any one of their dependencies does. Note that these kinds of strategies break the assumptions of the original graph implementation's update policy, and are in general harder to write correct lazy update strategies for.
Thus, we've designed this new `ExecutionGraph` to allow for a more general lazy update strategy, ultimately facilitating interesting kinds of control flow.
In this graph, when thunks are inserted they are assigned a `ComputationId` that will never change. As a node is reduced, the new thunks it reduces to are assigned that same `ComputationId`, and ultimately the irreducible value of the thunk is assigned that ID as well. `ComputationId`s thus serve as stable names for computations: names that do not change during the reduction process. By expressing dependencies as `ComputationId`s, dependencies stay stable during updates. In particular, this (mostly) separates the process of updating hashes from the process of changing the graph's structure, which makes writing a correct lazy hash-update system much easier.
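As a sketch of what this buys us (again with illustrative names, not the actual types), the stable key moves outside the hash, so a reduction touches only one entry:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

using ComputationId = std::uint64_t;  // assigned once at insertion, never changes

struct Computation {
  std::vector<ComputationId> deps;  // stable edges: no re-keying on reduction
  std::string current_hash;         // updated as the thunk reduces toward a value
};

std::map<ComputationId, Computation> graph;

// Reducing D to D' now touches only D's entry: its current_hash changes,
// but B, C, and A still refer to the same ComputationId.
```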
A final note: sometimes two computations converge and end up with the same cache entry. In this case we implement what is essentially an optimization by linking one to the other.
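One plausible way to picture that linking is union-find-style forwarding, where one ID points at the other and lookups chase the chain. This is an assumed mechanism for illustration, not necessarily how the PR implements it:

```cpp
#include <cstdint>
#include <map>

using ComputationId = std::uint64_t;

// When two IDs are discovered to denote the same computation, record a
// link from one to the other instead of merging their nodes.
std::map<ComputationId, ComputationId> links;

ComputationId resolve(ComputationId id) {
  while (links.count(id) != 0) id = links.at(id);  // follow forwarding chain
  return id;
}
```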
Major credits to @sadjad for doing this with me.