Open tgdwyer opened 1 week ago
> add `--output-file=FILENAME` option to cause test outputs to be logged to FILENAME instead of stderr (the default)
I'm not completely opposed, but if we go that route I would want to spend some time to make sure that things are handled "uniformly". From a quick look at your patch the final result still goes to stderr.

Just to be sure, you're not blocked by this, right? (I assume you can use a patched git dependency in `stack.yaml`.)
> Redirection of standard error doesn't seem to work
I'm kind of puzzled why this would be the case. Do you have a repro (only using `doctest`, no `stack`, no `cabal`)?
> More specifically, the reason we need this is that we use doctests for the class worksheets in a Haskell course. We run doctest from a stack test, with an automated watcher that reruns on file change. The problem is that when a student first starts each worksheet (a file with around a dozen exercise functions to implement and several test cases per exercise), they have a file with a whole lot of functions whose bodies start off as `undefined`. The doctest for each such function fails and their terminal is swamped with output; they have to scroll back to the top to find the first failure, usually the one they are actively working on. If instead we can send the output to a file, they can view just the head of the file and see the first failure or two on a single screen.
Would a `--fail-fast` option work for you instead, similar to what `hspec` provides? Or have you looked at `hspec` itself, in case it is more suitable for your use case? If you want to stick with `doctest`, a hybrid approach could still make sense in certain situations, where you extract the usage examples with `doctest` and then write a custom test driver that uses `hspec` to verify the examples. The benefit of this approach is that it is faster and that you get more features (besides `--fail-fast`, e.g. also better diffs). The downside of this approach is that you don't have the guarantee that your examples work in GHCi. It really depends on your exact requirements.

Thanks for the feedback and suggestions. You are right that there are multiple work-arounds for my use case, so I am not blocked on this pull request. No need to merge it unless you also see a use.
A fail-fast option would probably be more appropriate for what I want. Would you consider such an addition to doctest?
As for hspec, I will look into it. Do you know of any examples of custom test-drivers that integrate doctest with hspec?
Thanks again!
> A fail-fast option would probably be more appropriate for what I want. Would you consider such an addition to doctest?
Yes, I think that could be generally useful.
In case you want to take a stab at this, please add test cases. Don't waste time testing things manually with `Debug.Trace`.
> As for hspec, I will look into it. Do you know of any examples of custom test-drivers that integrate doctest with hspec?
Not exactly, but it's not particularly involved; basically something like:

- Use the `shouldBe` combinator to verify that actual and expected match.
- Structure the tests with `it` and `describe`.
- Run everything with the `hspec` function.

(Edit: To be clear, in this scenario you would generate a test module from the extracted examples. You would then use e.g. a Cabal test-suite section to compile and run the test module.)
Technically I would be available for consulting if you had budget for it.
See #455
> add `--output-file=FILENAME` option to cause test outputs to be logged to FILENAME instead of stderr (the default)
>
> We need this because currently we get swamped by doctest outputs. Redirection of standard error doesn't seem to work because it's `hCapture`d.
>
> More specifically, the reason we need this is that we use doctests for the class worksheets in a Haskell course. We run doctest from a stack test, with an automated watcher that reruns on file change. The problem is that when a student first starts each worksheet (a file with around a dozen exercise functions to implement and several test cases per exercise), they have a file with a whole lot of functions whose bodies start off as `undefined`. The doctest for each such function fails and their terminal is swamped with output; they have to scroll back to the top to find the first failure, usually the one they are actively working on. If instead we can send the output to a file, they can view just the head of the file and see the first failure or two on a single screen.
>
> It may sound trivial, but it would actually be a game changer for us. It's a big class, about 600 students this year at a major Australian university, with a wide range of skill and experience levels. The less we overwhelm them, the better.