raghur opened this issue 8 years ago
Sounds good! Probably not super high priority though (especially with attempts to move over to .Net Core by @BrainCrumbz)
Not totally convinced on this one. Here are some of the ideas behind the current approach, just to set a starting point:
I'm not saying it's totally wrong, but there is a rationale behind the current approach, and we should check that another approach is still coherent.
BTW currently there's a set of tests that just check this scenario and similar ones:
`NSpecSpecs/describe_RunningSpecs/Exceptions/when_<name-your-favorite-hook>_contains_exception.cs`
Just my two cents. If a before_all/before_each fails, the results of substructures are suspect (whether they pass or not). Thinking back to the code, it is definitely more complex to implement this proposed behavior: right now each it/specify is treated in isolation, so there would have to be some historical record of whether before_each/before_all failed.
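The "historical record" idea above could be sketched like this. This is a hypothetical, much-simplified runner in Python, not NSpec's actual code: once before_all fails, the context remembers the error and every child example is reported as failed without its body ever running.

```python
class Context:
    """Hypothetical, simplified test context (not NSpec's real API)."""
    def __init__(self, before_all=None, examples=()):
        self.before_all = before_all
        self.examples = list(examples)  # (name, body) pairs

def run_context(ctx):
    """Run a context. If before_all throws, remember the error and
    mark every example as failed without executing its body."""
    results = {}
    hook_error = None
    if ctx.before_all is not None:
        try:
            ctx.before_all()
        except Exception as e:
            hook_error = e  # the "historical information" the runner needs
    for name, body in ctx.examples:
        if hook_error is not None:
            # Child results are suspect: report the hook failure, skip the body.
            results[name] = ("failed", f"before_all failed: {hook_error}")
        else:
            try:
                body()
                results[name] = ("passed", None)
            except AssertionError as e:
                results[name] = ("failed", str(e))
    return results

# Usage: a context whose before_all (say, resetting system state) throws.
def broken_before_all():
    raise RuntimeError("cannot reset system state")

ctx = Context(before_all=broken_before_all,
              examples=[("it works", lambda: None),
                        ("it also works", lambda: None)])
results = run_context(ctx)
```

The point of the sketch is that the extra state is small (one remembered exception per context), but it does break the current "each example in isolation" model.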
> why should they run when there's a broken after all?
Good point, same issue here. Once an after_all/after_each fails, all child substructures are suspect. I think it's still a fairly low/minor annoyance. That, and it's kinda tricky to implement (I think).
Another thought: seeing how an it/specify fails (because of a before/after failure) may give insights on how to fix it. Not sure how RSpec handles this (worth looking into, @raghur).
Here's RSpec's behavior: https://www.relishapp.com/rspec/rspec-core/v/3-5/docs/hooks/before-and-after-hooks#failure-in-%60before(:context)%60-block
Interestingly, after_all failures do not affect the test outcome.
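That RSpec rule for after-hooks could be sketched like this (a hypothetical simplified runner, not RSpec's or NSpec's real code): the after_all error is only printed, so example outcomes and the failure count are unchanged, which is exactly why it can slip past a CI that only checks counts.

```python
def run_with_after_all(examples, after_all):
    """Run (name, body) examples, then after_all. An after_all error is
    reported to the console but does NOT change any example outcome."""
    outcomes = {}
    for name, body in examples:
        try:
            body()
            outcomes[name] = "passed"
        except AssertionError:
            outcomes[name] = "failed"
    try:
        after_all()
    except Exception as e:
        # Only surfaced in output; a CI checking the failed-example
        # count sees nothing.
        print(f"WARNING: after_all raised: {e}")
    return outcomes

# Usage: the example passes even though teardown blows up.
def broken_cleanup():
    raise RuntimeError("teardown blew up")

outcomes = run_with_after_all([("example passes", lambda: None)],
                              broken_cleanup)
```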
I don't necessarily agree with it, though. A specific example: I have about 60 examples in 4 spec classes, and the before_all of the first spec resets system state. Sometimes when that before_all fails, the tests blunder through all the examples (even with failfast), and I'd rather they just stop if before_all doesn't work.
I understand, though, that if we're following RSpec and want to leave things as is, that's fine with me.
One issue with reporting a problem only in console output (e.g. an after_all failure), rather than as failed test cases, is that such output goes almost unnoticed under VS test runners. I'm not sure about Continuous Integration scenarios, but I'd guess that if CI just checks a return value or a failed-example count, it goes unnoticed there too. We'd tend to avoid such undesired effects.
About failfast: it sounds like this should be a way to stop running examples once any has failed, including because of a broken before/before all/after/after all. What do you people think about it?
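A failfast that treats broken hooks as failures could be sketched like this (hypothetical model: a list of `(before_all, examples)` pairs, nothing like NSpec's real internals): the first failure anywhere, hook or example, stops the whole run.

```python
def run_failfast(contexts):
    """Run contexts until the first failure. `contexts` is a list of
    (before_all, examples) pairs; examples are (name, body) pairs.
    A broken before_all counts as a failure and stops the run."""
    results = []
    stop = False
    for before_all, examples in contexts:
        try:
            before_all()
        except Exception:
            # A broken hook is a failure too: stop everything.
            results.append(("before_all", "failed"))
            stop = True
        if not stop:
            for name, body in examples:
                try:
                    body()
                    results.append((name, "passed"))
                except AssertionError:
                    results.append((name, "failed"))
                    stop = True
                    break
        if stop:
            break
    return results

# Usage: the first context's before_all breaks, so neither ex1 nor ex2 runs.
def bad_hook():
    raise RuntimeError("reset failed")

contexts = [(bad_hook, [("ex1", lambda: None)]),
            (lambda: None, [("ex2", lambda: None)])]
results = run_failfast(contexts)
```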
Something I've noticed with the console output is that even though the beforeEach threw an exception, the color of the test remained green. It was odd to see all green passing tests, and then a wall of red text below them. If all test hooks (before all/before each/.../after each/after all), at both class level and nested levels, take part in actually running an example, so that all together they form "the example" rather than just the example body, then I would expect the example to be red, with a message similar to a failed assertion.
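The "hooks plus body together form the example" idea could be sketched like this (a hypothetical composition, not NSpec's actual implementation): any exception from any hook or from the body turns the whole example red, and a failed before-hook also skips the body.

```python
def run_example(hooks_before, body, hooks_after):
    """One example = before hooks + body + after hooks. Any exception
    anywhere makes the example fail (i.e. show red, not green)."""
    errors = []
    for hook in hooks_before:
        try:
            hook()
        except Exception as e:
            errors.append(e)
            # A broken before hook fails the example and skips the body.
            return ("failed", errors)
    try:
        body()
    except Exception as e:
        errors.append(e)
    for hook in hooks_after:
        try:
            hook()
        except Exception as e:
            errors.append(e)
    return ("passed" if not errors else "failed", errors)

# Usage: a broken beforeEach makes the example itself red.
def broken_before():
    raise RuntimeError("beforeEach broke")

status, errors = run_example([broken_before], lambda: None, [])
ok_status, ok_errors = run_example([], lambda: None, [])
```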
So, I've noticed that even when before_all fails, the examples in the context are still run. This happens even with the --failfast flag on nspecrunner.exe. Wouldn't it be better to fail the entire context and not run the specs at all if before_all throws? Currently, every example in the context is run and then reported as a failure, which just feels too noisy. I think it's fair to expect that if before_all fails, there's no point in executing the examples.