Note that these three nodes are indistinguishable. Indeed, once we optimize the DFA:
dfa = reduce_nodes(dfa)
We get a DFA with only one node:
But look at the edges: they are clearly indistinguishable too! Why are they not collapsed into a single edge?
Briefly, what it did before was:
After realizing that nodes 1, 2 and 3 can be represented by the single node {1, 2, 3}, it needs to add edges for that node. Since the nodes are indistinguishable, they must have equivalent outgoing edges, so the new edges are simply the edges of any node in {1, 2, 3}.
So it picked an arbitrary node, e.g. 1, and duplicated its edges, this time pointing back to the merged node itself, since nodes 2 and 3 are the same as 1.
However, it didn't take into account that some of those edges could themselves be collapsed, because they point to the same child. For example, the edge from 1 to 2 is the "same" edge as the edge from 1 to 3: it has the same actions, preconditions, source and (after merging) destination.
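The old behaviour can be sketched as follows. This is a minimal illustration using a hypothetical edge representation of `(src, dst, label)` tuples; the names `merge_nodes_naive`, `group` and `rep` are illustrative, not the library's actual API.

```python
def merge_nodes_naive(edges, group, rep):
    """Merge the indistinguishable nodes in `group` into the representative
    `rep`, keeping only the representative's outgoing edges."""
    merged = []
    for src, dst, label in edges:
        if src == rep:
            # Redirect edges that point into the merged group back to `rep`.
            new_dst = rep if dst in group else dst
            # No equivalence check here, so duplicate edges survive.
            merged.append((rep, new_dst, label))
    return merged

# The edges 1 -> 2 and 1 -> 3 both become the self-loop 1 -> 1,
# and neither copy is removed:
merge_nodes_naive([(1, 2, "a"), (1, 3, "a")], {1, 2, 3}, 1)
# → [(1, 1, "a"), (1, 1, "a")]
```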
Now, it checks whether an equivalent edge already exists before adding a second one. If an equivalent edge does exist, it simply unions the edge labels into the existing edge.
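A sketch of the new check, under the same assumptions: edges carry actions, preconditions and a set of labels, and two edges are equivalent when everything except the labels matches. The `Edge` class and `add_edge` helper are hypothetical, not the library's real types.

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    src: int
    dst: int
    actions: frozenset
    preconditions: frozenset
    labels: set = field(default_factory=set)

    def equivalent(self, other):
        # Equivalent: same source, destination, actions and preconditions;
        # only the labels may differ.
        return (self.src, self.dst, self.actions, self.preconditions) == \
               (other.src, other.dst, other.actions, other.preconditions)

def add_edge(edges, new):
    """Add `new`, unioning its labels into an equivalent existing edge if any."""
    for edge in edges:
        if edge.equivalent(new):
            edge.labels |= new.labels  # merge labels instead of duplicating the edge
            return edges
    edges.append(new)
    return edges

edges = []
add_edge(edges, Edge(1, 1, frozenset({"act"}), frozenset(), {"x"}))
add_edge(edges, Edge(1, 1, frozenset({"act"}), frozenset(), {"y"}))
# edges now holds a single self-loop labelled {"x", "y"}
```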