HypothesisWorks / hypothesis

Hypothesis is a powerful, flexible, and easy to use library for property-based testing.
https://hypothesis.works

Stable support for symbolic execution #3914

Open · Zac-HD opened this issue 3 months ago

Zac-HD commented 3 months ago

Following https://github.com/HypothesisWorks/hypothesis/issues/3086 and https://github.com/HypothesisWorks/hypothesis/pull/3806, you can pip install hypothesis[crosshair], load a settings profile with backend="crosshair", and CrossHair will use the full power of the Z3 SMT solver to find bugs! ✨

...but seeing as this is a wildly ambitious research project, there are probably a lot of problems in our implementation, not just in the code we're testing. That's why we marked the feature experimental, and this issue is to track such problems so that we can eventually fix them.
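
For reference, a minimal sketch of registering and loading such a profile (the profile name here is arbitrary; only the backend="crosshair" setting comes from the feature itself):

from hypothesis import settings

settings.register_profile("crosshair", backend="crosshair")
settings.load_profile("crosshair")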

Zac-HD commented 3 months ago

Run with pytest -Wignore to avoid the z3 deprecation warning:

from hypothesis import given, settings, strategies as st

@settings(backend="crosshair")
@given(st.text("abcdefg"))
def test_hits_internal_assert(x):
    assert set(x).issubset(set("abcdefg"))

pschanely commented 3 months ago

I'll start trying out hypothesis-jsonschema next!

Zac-HD commented 2 months ago

@pschanely I'm getting an exception when trying to serialize arguments for observability mode:

# Execute with `HYPOTHESIS_EXPERIMENTAL_OBSERVABILITY=1 pytest -Wignore t.py`
from hypothesis import given, settings, strategies as st

@settings(backend="crosshair", database=None, derandomize=True)
@given(st.integers())
def test_hits_internal_assert(x):
    assert x % 64 != 4

Initially I thought that this was because we're only calling the post-test-case hook when the test function raised an exception, but patching that (https://github.com/HypothesisWorks/hypothesis/compare/master...Zac-HD:hypothesis:post-tc-observability-hook) still results in the same error:

  File "python3.10/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type LazyIntSymbolicStr is not JSON serializable

Is there some reasonable way to support this? It seems possible in principle to serialize it out, once the object (or the root nodes of the tree it's derived from) have been materialized, but you'd have a better sense of that than I would. Worst case, we can extend our internal to_jsonable() function to handle it somehow.
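
As a stopgap sketch (not Hypothesis's actual to_jsonable() implementation), a JSON default hook could at least degrade unknown objects to their repr instead of crashing; properly realizing the symbolic value first, as discussed below, would be better:

import json

def fallback(obj):
    # Anything the stdlib encoder can't serialize (e.g. a LazyIntSymbolicStr)
    # degrades to its repr rather than raising TypeError.
    return repr(obj)

payload = {"arguments": {"x": object()}}  # stand-in for the real testcase dict
print(json.dumps(payload, default=fallback))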

pschanely commented 2 months ago

Ah, so I think this testcase dictionary can contain values derived from symbolics: at least the "representation" and "arguments" keys? In general it's not OK to touch data derived from symbolics outside the per-run context manager; I'm not sure which solutions are feasible here. One weird thing to note about my plugin's post_test_case_hook - after the context manager exits, it can only produce realized values for the drawn primitives. But if you call it inside the per-test-case context manager, it can (deeply) realize any value. So if we could somehow call that on the JSON blob that we want to write out to the observability file, I think we'd be good. (I have no clue how hard that would be)

Aside: it looks to me like the "representation" string is computed before running the code under test, so it's good that it comes out symbolic. But even the generation of the string will force us into a variety of early decisions - crosshair with observability will behave very differently than without it. I assume not a big deal at this stage.

Zac-HD commented 2 months ago

> Ah, so I think this testcase dictionary can contain values derived from symbolics: at least the "representation" and "arguments" keys?

Yep, that's right - the representation and arguments values are derived very directly from the arguments, and the features and metadata values may also derive at least in part from the arguments, e.g. via arguments to hypothesis.target(). The coverage (and timing) values depend only on which branches were executed, not on the values, so I think they're concrete, like the remaining keys.
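
To make that concrete, here's a simplified sketch of one test-case record using only the keys named above (the values are made up, and real records contain more fields):

testcase = {
    "representation": "test_hits_internal_assert(x=4)",  # derived from the (symbolic) arguments
    "arguments": {"x": 4},                                # derived from the (symbolic) arguments
    "features": {},       # may be partly symbolic, e.g. via hypothesis.target()
    "metadata": {},       # may be partly symbolic
    "coverage": {"t.py": [41, 42]},     # concrete: depends only on branches taken
    "timing": {"execute_test": 0.001},  # concrete
}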

> In general it's not OK to touch data derived from symbolics outside the per-run context manager; I'm not sure which solutions are feasible here. One weird thing to note about my plugin's post_test_case_hook - after the context manager exits, it can only produce realized values for the drawn primitives. But if you call it inside the per-test-case context manager, it can (deeply) realize any value. So if we could somehow call that on the JSON blob that we want to write out to the observability file, I think we'd be good. (I have no clue how hard that would be)

Rather than the json blob itself, I think we'll aim to deep-realize the various dictionaries that become values in the json blob - that's mostly just a matter of adding and moving some hook calls. I think your current hook implementation can do this, but if we use it this way we'll need to update expectations for other libraries implementing the hook (which is fine tbc).

It might also be useful to mark the end of the "region of interest" while the context is still open; I don't want to spend time exploring values which differ only in how we do post-test serialization!

> Aside: it looks to me like the "representation" string is computed before running the code under test, so it's good that it comes out symbolic. But even the generation of the string will force us into a variety of early decisions - crosshair with observability will behave very differently than without it. I assume not a big deal at this stage.

We pre-compute this so that we can still show an accurate representation even if the test function mutates its arguments or crashes in some especially annoying way, but it seems reasonable to defer this on backends which define the post_test_case hook. On the other hand it seems like this doesn't intrinsically need to force any early decisions, if we don't touch the string until later?

pschanely commented 2 months ago

> Rather than the json blob itself, I think we'll aim to deep-realize the various dictionaries that become values in the json blob - that's mostly just a matter of adding and moving some hook calls. I think your current hook implementation can do this, but if we use it this way we'll need to update expectations for other libraries implementing the hook (which is fine tbc).

> It might also be useful to mark the end of the "region of interest" while the context is still open; I don't want to spend time exploring values which differ only in how we do post-test serialization!

This is a good point. Thinking about it again, I think the ideal interface would be something where I could give ancillary context managers for wherever you manipulate symbolics - and I'd make it so that this manager would not play a part in the path search tree if the main function has completed. We'd use this to guard both the construction of the testcase data and the JSON file write. FWIW, the construction of the representation string happens to work right now, but seemingly trivial changes on either side could cause that to break if I don't have the right interpreter hooks in place. I could look into this over the weekend if we wanted.

> We pre-compute this so that we can still show an accurate representation even if the test function mutates its arguments or crashes in some especially annoying way, but it seems reasonable to defer this on backends which define the post_test_case hook. On the other hand it seems like this doesn't intrinsically need to force any early decisions, if we don't touch the string until later?

First step is probably just to get things not crashing. Longer term, I think observability is honestly pretty interesting from a crosshair perspective, but probably only if it doesn't change the path tree. That won't happen under the current setup; although we have a symbolic string, it will likely have a realized length, so we're exploring paths with early decisions like "integers in the 100-199 range."

Zac-HD commented 2 months ago

I'm likely to dig into this (or support someone else to) at the PyCon sprints in May; so long as we're ready by then there's no rush for this weekend.

And yeah, the length thing makes sense as an implementation limitation. I guess doing a union of lengths just has too much overhead?

pschanely commented 2 months ago

> And yeah, the length thing makes sense as an implementation limitation. I guess doing a union of lengths just has too much overhead?

Most of the meaningful decisions in CrossHair are about the tradeoff between making the solver work harder vs running more iterations. Sequence solving tanks the solver pretty fast, and it's largely avoided in the current implementation. In its ultimate form, CrossHair would have several strategies and be able to employ them adaptively (a single length, a bounded union of specific lengths, the full sequence solver, ...); but I'm not there yet.

Regardless, it would be foolish of me to try and guarantee that certain kinds of symbolic operations will never introduce a fork in the decision tree. 😄

pschanely commented 1 month ago

> This is a good point. Thinking about it again, I think the ideal interface would be something where I could give ancillary context managers for wherever you manipulate symbolics - and I'd make it so that this manager would not play a part in the path search tree if the main function has completed. We'd use this to guard both the construction of the testcase data and the JSON file write.

> I'm likely to dig into this (or support someone else to) at the PyCon sprints in May; so long as we're ready by then there's no rush for this weekend.

Ok, I'm not positive about whether this is really the right strategy, but in v0.0.4 I've added a post_test_case_context_manager() in addition to the existing per_test_case_context_manager(); my hope is that we can use this whenever we want to perform operations involving potential symbolics. (I am unclear on how feasible this would be to employ on the hypothesis side, though.)

You cannot enter the post_test_case_context_manager inside the per_test_case_context_manager currently. I could permit this if we want, but I'd call out that decisions made in those cases will still grow the search tree.
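
For illustration, a self-contained toy of the intended ordering; only the two context-manager names come from the plugin, everything else here is a stand-in:

from contextlib import contextmanager

class ToyProvider:
    @contextmanager
    def per_test_case_context_manager(self):
        yield  # symbolic values are only safe to create and touch in here

    @contextmanager
    def post_test_case_context_manager(self):
        yield  # post-run work that still touches symbolic-derived values

provider = ToyProvider()
with provider.per_test_case_context_manager():
    pass  # run the user's test function
with provider.post_test_case_context_manager():
    pass  # build and realize the observability record
# only fully concrete data should be written to the observability file after this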

fubuloubu commented 3 weeks ago

If I can make a small UX suggestion, it would be great if the choice of backend could be made via a config variable (with a pytest flag as a really nice-to-have) instead of having to set it with @settings, e.g.

$ pytest ... --hypothesis-backend crosshair ...
# OR
$ HYPOTHESIS_BACKEND=crosshair pytest ...

Personally, I have done a lot of work with both fuzzing and formal verification. What works very well for me is to use fuzzing while prototyping property tests (or otherwise making code changes), since it's fast, has a defined timeline, and gives quick feedback, and it's a great way to find easy bugs quickly. Then I pivot to a formal verification backend, without changing the test cases, to try to validate the properties in a much stronger way (which usually takes much longer to execute and might be done in CI).

Still seems like early days for this feature, but as someone who will likely expose this to developers (who may not know how to use formal proving effectively themselves), I am very, very excited about what it can be used for.

Zac-HD commented 3 weeks ago

You can already do this for yourself using our settings profiles functionality and a small pytest plugin in conftest.py! We should consider upstream support of some kind once it's stable too 🙂
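
For example, a conftest.py along these lines should work today (a sketch; the HYPOTHESIS_BACKEND variable name is just this example's choice, not something Hypothesis reads itself):

# conftest.py
import os

from hypothesis import settings

settings.register_profile("crosshair", backend="crosshair")
if os.environ.get("HYPOTHESIS_BACKEND") == "crosshair":
    settings.load_profile("crosshair")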

pschanely commented 2 weeks ago

@Zac-HD crosshair is finally on the latest z3! I think we may also need some sort of issue for figuring out what to do with settings; I think crosshair too easily runs afoul of HealthCheck.too_slow and deadline limits.

@fubuloubu I've added an explicit recipe to the hypothesis-crosshair README for running all tests under crosshair. BTW, if you've tried it, I'd also love to hear from you directly about what is (and isn't) working for you!

tybug commented 2 weeks ago

Plausibly we could disable all timing-related health checks when running with other backends, or provide an API for backends to (1) opt out of health checks or (2) modify them by changing their limits.

Zac-HD commented 2 weeks ago

Instead of more interaction between settings, can we just add those to the example configuration?

pschanely commented 2 weeks ago

> Instead of more interaction between settings, can we just add those to the example configuration?

Yup, I think that's fine, especially as we're getting started.
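
For example, the readme profile could be extended along these lines (a sketch; exactly which health checks to suppress is still to be worked out):

from hypothesis import HealthCheck, settings

settings.register_profile(
    "crosshair",
    backend="crosshair",
    deadline=None,
    suppress_health_check=[HealthCheck.too_slow],
)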