jonnyarnold opened this issue 9 years ago
oooh. Given values accumulate. That's an interesting idea.
So one of the problems here is that we don't have a good way to shrink values. My generator functions (e.g. integer) currently return a struct that contains a generator function as well as a shrinker function. The generator is called again with each iteration of the test to get random data, and the shrinker is (will be) called when an assertion fails, in order to find the simplest possible value for which the test fails.
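As a loose illustration of that generator-plus-shrinker struct, here is a Python sketch (Python rather than Elixir purely for illustration; the Generator class, the bounds, and the shrink strategy are my assumptions, not the project's actual API):

```python
import random
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Generator:
    gen: Callable[[], int]              # called each test iteration for fresh random data
    shrink: Callable[[int], List[int]]  # called on a failure to propose simpler values

def integer(min_val=-1000, max_val=1000):
    def gen():
        return random.randint(min_val, max_val)
    def shrink(value):
        # propose "simpler" candidates, biased toward zero
        return [] if value == 0 else [0, value // 2]
    return Generator(gen=gen, shrink=shrink)
```

The test runner would call gen repeatedly, and only call shrink once an assertion has failed.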
I don't know how to make it so custom types are easy for the user to create. I don't know if this is needed either.
I also don't know how to make all of these equivalent:
def my_macro(x) do
# stuff...
end
my_macro([1, 2, 3])
vars = [1, 2, 3]
my_macro(vars)
@vars = [1, 2, 3]
my_macro(vars)
# etc...
Shrinking is an interesting idea, but I think it makes this kind of framework too complicated. Why wouldn't you define:
def integer(min \\ MIN_INT, max \\ MAX_INT) do
  [min, min + 1, round((min + max) / 2), max - 1, max]
end
Maybe I've missed some 'interesting' integers here, but surely a concrete failing test case is sufficient?
Shrinking is pretty much the only thing that makes QuickCheck worth using; otherwise it is impossible to tell what the problem is, because the data provided is so large. Generating random data is easy; making it easy for a person to understand is harder.
Your example doesn't really provide any advantage over manually creating test data.
I suppose the argument I'm making is: what advantages do randomly-generated data sets give?
I don't know an awful lot about QuickCheck; is it clever enough to shrink values to 'the odd numbers' if the assertion is x % 2 == 0? It seems to me QuickCheck would only deal with out-of-range-type issues.
It can be as smart as we make it. I don't know what we'd want to do for ints.
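One way it can be that smart: re-check the failing assertion on every shrink step, so the shrunk value can never leave the failing set. A minimal Python sketch of that loop (the names and the candidate strategy are my assumptions, not QuickCheck's actual algorithm):

```python
def shrink_failure(value, fails, candidates):
    # Greedy shrink loop: keep replacing the failing value with any
    # strictly simpler candidate that still fails, so the result stays
    # inside the failing set (e.g. the odd numbers for i % 2 == 0).
    improved = True
    while improved:
        improved = False
        for c in candidates(value):
            if abs(c) < abs(value) and fails(c):
                value = c
                improved = True
                break
    return value

def int_candidates(v):
    # illustrative integer candidates, biased toward zero
    return [0, v // 2, v // 2 + 1, v - 1]
```

With fails = lambda i: i % 2 != 0, a large failing value like 90841 shrinks all the way down to 1, because every intermediate value is re-checked against the assertion.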
I suppose the argument I'm making is: what advantages do randomly-generated data sets give?
It finds problems you didn't. That's it really.
Here are some examples -> https://hypothesis.readthedocs.org/en/latest/examples.html
As a general rule, you could shrink values via binary search?
If you have a list describing a domain:
[a_1, a_2, ..., a_n]
Then you sample this array to get your random values. If one fails, perform a binary search between index 0 and the index of the failing value to find the pass/fail boundary.
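The bisection step above could be sketched like this (a Python sketch; it assumes, as discussed below, that failures form a contiguous suffix of the sorted domain, and the function name is illustrative):

```python
def shrink_by_bisection(domain, fails):
    # domain is a sorted sample [a_1, ..., a_n] whose last element is
    # known to fail. Binary-search for the pass/fail boundary and return
    # the first (simplest) failing value.
    lo, hi = 0, len(domain) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if fails(domain[mid]):
            hi = mid       # boundary is at or before mid
        else:
            lo = mid + 1   # boundary is after mid
    return domain[lo]
```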
This works fine as long as:
Pretty neat idea. Open an issue for it?
Also, I've started doing C again, so this project is probably going to get neglected.
It's good as long as your assertion 'passes' up to a certain range and 'fails' beyond that. We'll get really weird things for the following:
def integer do
  range(MIN_INT, MAX_INT)
end
def even?(i) do
  i % 2 == 0
end
describe "even?" do
  given [i: integer] do
    assert even?(i)
  end
end
We'll probably get something like
"even?" given i = <BIG NUMBER> fails assertion even?(i)
In this case we might do something like:
even? assertion even?(i):
PASS when i = 124, 906, 11068, -1094, 0
FAIL when i = 1, -99, 4097, -11111, 90841
This has the benefit of not requiring any shrinking...
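That pass/fail report could be sketched like so (a Python sketch; the function names, trial count, and output formatting are my assumptions):

```python
import random

def report_samples(assertion, gen, trials=10, seed=0):
    # Run an assertion against random samples and report which values
    # passed and which failed, with no shrinking at all.
    rng = random.Random(seed)
    passes, fails = [], []
    for _ in range(trials):
        i = gen(rng)
        (passes if assertion(i) else fails).append(i)
    return passes, fails

def even(i):
    return i % 2 == 0

passes, fails = report_samples(even, lambda rng: rng.randint(-10000, 10000))
print("PASS when i =", ", ".join(map(str, passes)))
print("FAIL when i =", ", ".join(map(str, fails)))
```

The trade-off is that none of the failing values are simplified, so the reader still has to spot the pattern themselves.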
It's good as long as your assertion 'passes' up to a certain range and 'fails' beyond that.
This is why you use multiple shrinkers :)
Since @lpil challenged me yesterday, here's a first draft of a test syntax that I think could be achieved.