ggrieco-tob opened 1 year ago
I had a look at the paper, thanks!
It seems that what the tool is doing is:
When traces are passed from the fuzzer to the symbolic
execution, the goal is to discover new transitions that fuzzing
had not previously found. Driller’s concolic execution engine
traces the input, following the same path that was taken by
the fuzzer. When Driller comes upon a conditional control
flow transfer, it checks if inverting that condition would result
in the discovery of a new state transition. If it will, Driller
produces an example input that will drive execution through
the new state transition instead of the original control flow.
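The quoted mechanism can be sketched in a few lines. This is a toy stand-in only: the names are made up, the "solver" is a brute-force search over a tiny input space, and a real implementation (Driller, or hevm's concolic mode) would replay the trace in a concolic engine and hand the inverted path condition to an SMT solver.

```python
# Toy sketch of Driller-style condition inversion. All names are
# hypothetical; the brute-force search stands in for an SMT query.

def trace(program, data):
    """Replay `program` (a list of branch predicates) on `data`,
    recording which way each conditional went."""
    return [pred(data) for pred in program]

def invert_one(program, data, seen_transitions, search_space):
    """For each conditional on the concrete path, try to find an input
    that follows the same prefix but flips that condition, yielding a
    state transition the fuzzer has not seen before."""
    path = trace(program, data)
    for i, taken in enumerate(path):
        target = (i, not taken)          # the flipped state transition
        if target in seen_transitions:
            continue
        # Stand-in for an SMT query: search for an input that keeps the
        # path prefix but inverts condition i.
        for candidate in search_space:
            cpath = trace(program, candidate)
            if cpath[:i] == path[:i] and cpath[i] != taken:
                yield candidate

# Example: a program gated on a "specific" byte value a fuzzer would
# rarely guess.
program = [lambda d: d[0] == 0x42, lambda d: d[1] > 10]
seed = bytes([0, 0])
space = [bytes([a, b]) for a in range(256) for b in (0, 200)]
new_inputs = list(invert_one(program, seed, set(), space))
```

Each yielded input drives execution through one previously unseen transition instead of the original control flow.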
This makes sense. Kinda interesting. I haven't worked on the concrete fuzzing part of HEVM, though, so I can't comment on that side. But this would indeed be a valid way to make use of the symbolic execution capabilities of HEVM in the concrete fuzzing case: analyze the current set of explored paths, pick the most interesting ones [1] to hand to the symbolic framework, and check whether they can be mutated into a path/coverage that hasn't been seen before. Here, [1] is:
1) The path that the input causes the application to take
was the first to trigger some state transition.
2) The path that the input causes the application to take
was the first to be placed into a unique “loop bucket”.
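Both criteria above can be expressed as one novelty check over (transition, bucket) pairs. A minimal sketch, loosely following AFL's power-of-two hit-count bucketing (the function names here are hypothetical, not AFL's actual API):

```python
# Illustrative sketch of the two selection criteria: an input is handed
# to the symbolic engine if it is the first to trigger some transition,
# or the first to land an existing transition in a new "loop bucket".

def bucket(hits):
    """Collapse a transition hit count into AFL-style buckets."""
    for b in (1, 2, 3, 4, 8, 16, 32, 128):
        if hits <= b:
            return b
    return 255

def is_interesting(edge_hits, seen):
    """Return True if this input produced any (transition, bucket) pair
    not observed before; `seen` accumulates across the whole campaign."""
    novel = False
    for edge, hits in edge_hits.items():
        key = (edge, bucket(hits))
        if key not in seen:
            seen.add(key)
            novel = True
    return novel

seen = set()
first = is_interesting({"a->b": 1}, seen)    # new transition
repeat = is_interesting({"a->b": 1}, seen)   # nothing new
looped = is_interesting({"a->b": 40}, seen)  # same edge, new bucket
```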
Which is basically the most obvious thing to do given how AFL works, but it does indeed make sense.
Something that I found interesting in the paper, and may help with symbolic execution is that angr
[...] uses an index-based memory model in
which read addresses may be symbolic, but write addresses are
always concretized.
This approach, popularized by Mayhem,
is an important optimization to keep the analysis feasible:
if both read and write addresses were symbolic, a repeated
read and write using the same symbolic index would result
in a quadratic increase in symbolic constraints or, depending
on the implementation details of the symbolic execution
engine, the complexity of the stored symbolic expressions.
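To make the asymmetry concrete, here is a tiny model of such a memory: writes must supply a concrete address, while a read at a symbolic index folds every written cell into an if-then-else chain. The expression type is hand-rolled for illustration; angr/Mayhem would build these terms in a real solver backend.

```python
# Sketch of an index-based memory model: symbolic reads, concretized
# writes. The Ite type is a made-up stand-in for a solver expression.

from dataclasses import dataclass

@dataclass(frozen=True)
class Ite:
    cond: tuple    # (symbolic_index_name, concrete_addr) equality test
    then: object
    els: object

class Memory:
    def __init__(self):
        self.cells = {}               # concrete address -> stored value

    def write(self, addr, value):
        # Writes are always concretized: `addr` must be a concrete int.
        assert isinstance(addr, int)
        self.cells[addr] = value

    def read(self, index):
        if isinstance(index, int):    # concrete read: direct lookup
            return self.cells.get(index, 0)
        # Symbolic read: one ITE node per written cell, so the term
        # grows linearly in the number of writes. Allowing symbolic
        # writes too is what blows this up, as the quote explains.
        expr = 0
        for addr, val in sorted(self.cells.items()):
            expr = Ite((index, addr), val, expr)
        return expr

m = Memory()
m.write(0x10, 7)
m.write(0x20, 9)
```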
This is something worth thinking about: a more "loose" approach, where we don't guarantee completeness but can explore more. It wouldn't be symbolic/concolic or concrete, but rather loosely symbolic; still, it could be good enough to find counterexamples when things explode.
I don't quite agree with the authors' view of "compartments":
A core intuition behind the design of Driller is that
applications process two different classes of user input:
general input, representing a wide range of values that can
be valid, and specific input, representing input that must
take on one of a select few possible values. Conceptually,
an application’s checks on the latter type of input split the
application into compartments. Execution flow moves between
compartments through checks against specific input, while,
within a compartment, the application processes general input.
I understand that it's easy to think of code as if it had some "wide" holes and some "tiny" holes that fuzzers need to pass through, but I think that's quite a simplistic view of executables. It probably helps in some cases, but I am not convinced. Perhaps I'm wrong.
I was wondering if hevm plans to support a symbolic execution mode where concrete inputs can be used to refine or prune the exploration.
This is possible today. Currently symbolic execution proceeds as follows:
summarization: all branches are explored, producing a term of type `Expr End`. This is a term in our internal expression language that represents all possible executions of a given contract from a given starting state. The starting state can be entirely symbolic, entirely concrete, or any combination in between. This stage performs very few SMT queries and is generally pretty fast (branches are explored in parallel). We are overeager here, and the resulting output may contain branches that are in actuality unreachable (due to unsatisfiable branch conditions); this is a performance optimization that lets us skip a lot of SMT queries, which are significantly more expensive than branch exploration.
simplification: we apply various domain specific simplifications and heuristics to simplify the expression from step one.
query dispatch: here we are searching for reachable branches where some postcondition can be violated (usually we are looking for assertion violations or failing ds-test assertions). Depending on the complexity of the postcondition, we can often determine statically (i.e. without calling an SMT solver) whether a given branch can possibly violate the postcondition. We dispatch SMT queries to determine reachability for any branches that we cannot statically determine to be safe: one query per branch, executed in parallel.
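The three stages above can be modeled in miniature. Everything here is illustrative (toy expression shapes, a lambda standing in for the SMT solver), not hevm's actual types; the point is the control flow: summarize eagerly, simplify cheaply, and query the solver only for branches that survive the static check.

```python
# Toy model of the summarize / simplify / dispatch pipeline.
# All names and representations are made up for illustration.

def summarize(branches):
    """Eagerly collect every branch as (path_condition, end_state),
    without checking satisfiability -- unreachable branches may be
    included, since skipping that check here is cheaper overall."""
    return list(branches)

def simplify(summary):
    """Apply cheap static rules; drop branches whose path condition is
    literally False, so no solver query is ever issued for them."""
    return [(cond, end) for cond, end in summary if cond is not False]

def dispatch(summary, violates, solver):
    """Query the solver only for branches we cannot statically rule
    out; one reachability query per branch."""
    hits = []
    for cond, end in summary:
        if not violates(end):
            continue                  # statically safe, no query needed
        if solver(cond):              # stand-in for an SMT call
            hits.append(end)
    return hits

summary = summarize([(True, "assert-fail"), (False, "dead"), (True, "ok")])
found = dispatch(simplify(summary),
                 violates=lambda e: e == "assert-fail",
                 solver=lambda cond: cond is True)
```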
Given the above framework, one approach to achieve what you want would be to insert a concretization step before simplification, where you would substitute symbolic variables in the summary with concrete values. Simplification should then reduce the term to its most concrete form, giving fewer branches and simpler queries.
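A minimal sketch of that concretization pass, assuming a made-up nested-tuple expression shape (hevm's real `Expr` is much richer): substitute concrete values for symbolic variable names, then let constant folding collapse branch conditions that become trivially false.

```python
# Sketch of the proposed concretization-before-simplification step.
# Expression representation is invented for illustration.

def substitute(expr, env):
    """Replace symbolic variable names in a nested-tuple expression
    with concrete values drawn from `env`."""
    if isinstance(expr, str):
        return env.get(expr, expr)
    if isinstance(expr, tuple):
        return tuple(substitute(e, env) for e in expr)
    return expr

def simplify(expr):
    """Constant-fold equality nodes of the form ('eq', a, b)."""
    if isinstance(expr, tuple) and expr[0] == "eq":
        a, b = simplify(expr[1]), simplify(expr[2])
        if isinstance(a, int) and isinstance(b, int):
            return a == b
        return ("eq", a, b)
    return expr

# Two branch conditions over symbolic variable "x"; after substituting
# the concrete input x=1, only the first remains satisfiable.
branches = [("eq", "x", 1), ("eq", "x", 2)]
concrete = [simplify(substitute(b, {"x": 1})) for b in branches]
```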
As a side note I would be very happy to discuss integration of hevm's symbolic features into echidna on a call if you're interested. I think there is a lot of potential there, and it's not something that we really have the bandwidth to explore at the EF right now.
Ended up including concolic execution in echidna in this PR. Had to hack it in by feeding a custom `fetcher` argument to `interpret` to control which path it takes (and setting `askSMTIter` to 0 so that the fetcher is used at every branch). When asked, the fetcher substitutes in concrete values for symbolic ones, simplifies the expression, and picks a branch to take. It ends up working pretty well, although it's definitely not an ideal solution; ideally this could eventually be worked into hevm itself.
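The fetcher's job reduces to: given a branch condition, evaluate it under the concrete inputs and commit to that side. A rough sketch under invented names (this is not echidna's or hevm's actual API, and the `eval`-based evaluator is a toy standing in for substitution plus simplification):

```python
# Rough sketch of the "custom fetcher" idea: at every branch the
# interpreter asks the fetcher which side to take; the fetcher plugs
# concrete inputs into the branch condition and commits to that side.

def make_concolic_fetcher(concrete_env, evaluate):
    """Build a fetcher closed over concrete inputs.
    `evaluate(cond, env)` must reduce a condition to True/False."""
    def fetcher(branch_condition):
        # With askSMTIter at 0 the interpreter consults this fetcher at
        # every branch, so execution follows the concrete input's path
        # and no SMT solver is ever invoked for branch decisions.
        return evaluate(branch_condition, concrete_env)
    return fetcher

# Toy evaluator: conditions as Python expressions over the env.
evaluate = lambda cond, env: bool(eval(cond, {}, env))
fetch = make_concolic_fetcher({"x": 5}, evaluate)
```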