Open moralismercatus opened 8 years ago
I wonder if we can begin restricting input size by gradually shrinking the input and checking the result against the set of unique tuples, reducing until we would lose a unique tuple. In the best case this cuts out unnecessary loops; in the worst case we restrict the input too severely. It would require maintaining another repository: traces with unique tuples, where each trace records its input size and its tuple set. We would first minimize the input for each trace, and only then start exploring derived traces. That way we can automatically start from the maximum input size for each trace. If that proves too costly, it may be better to start from a minimum-size input and double it.
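A rough sketch of the reduction loop, assuming a hypothetical `run_and_collect_tuples(data)` harness hook (not an existing API) that executes the target on `data` and returns the set of edge tuples observed:

```python
def minimize_input(data, run_and_collect_tuples):
    """Trim input from the tail while the trace's tuple set is preserved.

    `run_and_collect_tuples` is a hypothetical harness hook: it runs the
    target on `data` and returns the observed tuple set.  We stop trimming
    a chunk size as soon as trimming would lose a unique tuple, then retry
    with a smaller chunk.
    """
    baseline = run_and_collect_tuples(data)
    step = max(1, len(data) // 2)
    while step > 0:
        if len(data) > step:
            candidate = data[:-step]                      # drop a tail chunk
            if run_and_collect_tuples(candidate) == baseline:
                data = candidate                          # tuples intact: keep trim
                continue
        step //= 2                                        # too aggressive: halve chunk
    return data
```

This is only the "reduce until we lose a tuple" half of the idea; the doubling variant would grow a minimal prefix instead of shrinking the full input.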
While I'm thinking out loud, it may be worthwhile to add a simple tuple-redundancy check for trace selection: if a trace contains a unique tuple, accept it; otherwise reject it as redundant. This simplistic algorithm would clearly miss many paths, but it may be worth trying just to see how effective it is versus the terrible current FIFO selection strategy.
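The redundancy check could be as small as this sketch (traces shown as sequences of block labels; the class name and shape are my own, not anything existing in the codebase):

```python
class TupleNoveltyFilter:
    """Accept a trace only if it covers at least one tuple we have never
    seen before; otherwise reject it as redundant.  A deliberately
    simple stand-in for a smarter selection strategy, meant only to be
    compared against plain FIFO."""

    def __init__(self):
        self.seen = set()

    def accept(self, trace):
        # Consecutive (src, dst) pairs form the trace's tuple set.
        tuples = set(zip(trace, trace[1:]))
        novel = tuples - self.seen
        self.seen |= tuples          # remember everything we observed
        return bool(novel)
```

Feeding it the two example traces below: the first is accepted (everything is new), the second is also accepted because of the (C, B) tuple, and a third trace covering only already-seen tuples would be rejected.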
Interrupts are still problematic for unique-tuple calculations: every time an interrupt is captured, it likely produces a new unique tuple.
Inspired by AFL's whitepaper.
Prefer traces that have unique tuples (pairs). Take the following two traces:
A -> B -> C -> A -> B -> C -> A
A -> B -> C -> B -> C -> A -> B
Under plain block coverage these would be flagged as duplicates even though they are clearly distinct paths: both visit the same blocks, and they even share the tuples (AB, BC, CA). Tuple tracking still tells them apart, because the second trace contains the unique tuple (CB).
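Computing the tuple sets makes the distinction concrete (a sketch; the traces are written as strings of block labels):

```python
def edge_tuples(trace):
    # Consecutive (src, dst) pairs, in the spirit of AFL's edge tuples.
    return set(zip(trace, trace[1:]))

t1 = edge_tuples("ABCABCA")   # first trace
t2 = edge_tuples("ABCBCAB")   # second trace
print(t2 - t1)                # the tuple only the second trace covers
```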