Closed — MaximilianAlgehed closed this issue 6 years ago
Hey @MaximilianAlgehed,
You could do the same as Speculate:

* `make update-test-model` records the current output of each example as the reference output -- I usually keep these tracked by git;
* when `make test` is run, the output of each example is tested against the recorded reference output.

I am using a makefile to manage this, but I don't see a problem with doing this with cabal.
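As a rough sketch of the scheme, here is a small shell script that does both jobs. The layout (`examples/`, `test/model/`) and file names are illustrative assumptions, not the actual project layout of Speculate or QuickSpec:

```shell
#!/bin/sh
# Sketch of "golden" output testing: in update mode, record each
# example's output as the reference; in test mode, diff the current
# output against the recorded reference. Paths are hypothetical.
set -e

run_examples() {             # $1 = "update" or "test"
  for eg in examples/*.sh; do
    ref="test/model/$(basename "$eg" .sh).out"
    if [ "$1" = update ]; then
      sh "$eg" > "$ref"          # record reference output (track it in git)
    else
      sh "$eg" | diff - "$ref"   # nonzero exit if the output drifted
    fi
  done
}

# Demo: one example whose output we record once, then verify.
mkdir -p examples test/model
printf 'echo hello, world\n' > examples/hello.sh

run_examples update          # like `make update-test-model`
run_examples test            # like `make test`; silent when outputs match
echo OK
```

The same two loops translate directly into `update-test-model` and `test` targets in a makefile, or into a cabal test suite that shells out to `diff`.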
I do something similar on LeanCheck, FitSpec and Extrapolate, and @barrucadu does something similar on CoCo.
This has the nice advantage of recording the effect of changes on the output, provided you are careful to update the reference outputs before committing. For example: https://github.com/rudymatela/speculate/commit/b1df5ef7bb964e249b38a182551456e41396df88
I may have mentioned this to you in person, so forgive me if I am repeating myself :-) Just wanted to let you know as I think this may be helpful.
PS: the scripts I use to manage the reference outputs on Speculate are admittedly overengineered. Maybe the scripts on LeanCheck and Extrapolate are a better reference if you ever want to implement something similar.
Yeah, that sounds like a good way to do it!
Hey @rudymatela,
That sounds like a nice way of doing it. Don't worry about repeating yourself, I'm not the best at recalling past conversations anyway.
It would be nice to integrate this with some form of continuous integration like Travis, or some other tool that runs builds and tests when commits are made.
> It would be nice to integrate this with some form of continuous integration like Travis, or some other tool that runs builds and tests when commits are made.
Speculate and Extrapolate are under continuous integration on Travis. Tests comparing output with a reference are active there, so whenever output and reference do not match, Travis reports it.
For a near minimal example of using Travis+Haskell, you can check hello-haskell.
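For reference, a minimal legacy-style Travis configuration along those lines might look like the following. The GHC version and the `make test` target are assumptions for illustration, not the actual settings used by Speculate or hello-haskell:

```yaml
# Hypothetical .travis.yml sketch for a cabal project with
# reference-output tests driven by make.
language: haskell
ghc:
  - "8.4"
install:
  - cabal install --only-dependencies
script:
  - cabal build
  - make test   # diffs example output against the recorded references
```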
@nick8325 implemented something like this, which can be found here: https://travis-ci.org/nick8325/quickspec
We need some form of test suite to make sure we catch silly bugs.