Open treeowl opened 3 years ago
I have seen this many times with processes that start a (GHC-compiled Haskell) subprocess. I don't have all the details available right now, but from what I remember, this happens when you try to write to a redirected/captured handle in the subprocess after the reading end has been closed in the parent process.
Something like:

1. The parent process A creates a pipe `hErr` for the subprocess B to use as stderr.
2. The user hits `^C`.
3. On `UserInterrupt`, A does two things, in that order: it closes `hErr` and calls `waitForProcess` on B.
4. B receives `UserInterrupt` (assuming we started it with `delegate_ctlc`, which I think is the default). The `UserInterrupt` exception is not explicitly handled by B and is propagated to the default exception handler, which writes it to `stderr`.
5. The write to `stderr` fails with some "broken pipe" exception, as the reading end of `hErr` has been closed in the parent (GOTO 4).

This results in an infinite loop: 4/5/4/5/4/5/4/5/...
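To make the parent-side sequence concrete, here is a hypothetical sketch of the problematic pattern. None of this is code from hspec or hedgehog; `child` is a placeholder executable name, and the structure is only my guess at what the runner might be doing:

```haskell
import Control.Exception (AsyncException (UserInterrupt), catch, throwIO)
import System.IO (hClose, hGetContents)
import System.Process

main :: IO ()
main = do
  -- Create a pipe; the write end becomes the child's stderr.
  (readEnd, writeEnd) <- createPipe
  (_, _, _, ph) <-
    createProcess
      (proc "child" [])          -- placeholder executable
        { std_err = UseHandle writeEnd
        , delegate_ctlc = True   -- ^C (SIGINT) is delegated to the child
        }
  hClose writeEnd  -- the parent keeps only the reading end
  (hGetContents readEnd >>= putStr) `catch` \e -> case e of
    UserInterrupt -> do
      hClose readEnd           -- reading end gone: child's stderr now has no reader
      _ <- waitForProcess ph   -- child may bounce between steps 4/5 and never exit
      throwIO e
    _ -> throwIO e
```

If the child's default exception handler retries printing to the now-readerless stderr, this `waitForProcess` blocks forever while the child spins.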
I think it's unlikely that this is a bug in hspec or hspec-hedgehog. @treeowl do you remember how you invoked your tests? If it was through `cabal test`, or something like that, then it's probably worth looking at how they start subprocesses.
Specifically, in the parent, only close any pipe reading ends after `waitForProcess` has concluded, so that the child process does not go into an infinite loop when trying to write to a "closed" `stderr`.
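A minimal sketch of that ordering, assuming the parent holds the pipe's reading end in `readEnd` and the child's handle in `ph` (`cleanupOnInterrupt` is a hypothetical helper, not an existing API):

```haskell
import System.IO (Handle, hClose)
import System.Process (ProcessHandle, waitForProcess)

-- Hypothetical helper: call this from the parent's UserInterrupt handler.
cleanupOnInterrupt :: Handle -> ProcessHandle -> IO ()
cleanupOnInterrupt readEnd ph = do
  _ <- waitForProcess ph  -- reap the child first; its stderr still has a reader
  hClose readEnd          -- only then close the pipe's reading end
```

Since `^C` is delegated to the child via `delegate_ctlc`, the child gets a chance to print its final output and exit before the pipe disappears.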
I honestly have no idea whether this is a problem in hspec, hedgehog, or hspec-hedgehog, so I figured I'd start here. I'm running a small test suite testing pure code (so the code being tested doesn't fork anything). Sometimes, the generated test cases are much too large (a bug in my test suite), causing execution to take longer than I want to wait. When I hit `^C`, I get back to my shell immediately. Unfortunately, some background thread keeps going, seemingly indefinitely, burning lots of CPU, and I have to kill it separately. Any guesses?