Open fluffynuts opened 4 years ago
If memory serves, it works as it does due to a bug report some years back, which convinced us it should pass in this case.
😢
I just wrote a test with (currently) one item in the source, and, being a new test in a TDD cycle, I expected it to fail, only to find it passed... so I had to track down my faulty generator.
Perhaps failure is a bit too draconian (my preference, but it may be annoying to the reporter of the prior issue). How about marking the test as skipped instead?
Some points to consider...

- There can be multiple sources, so we need to be clear whether the discussion applies to each source individually or to the aggregate result of all sources.
- Sources generate tests, not just data. If no tests are generated, there is nothing to fail or skip. In the distant past (V2) we generated a fake failing test, which ended up being both messy in the code and confusing to users.
- A test method with no cases is a bit like a method with no code, or a fixture with no tests. We consider those passing.
- If you call your test a theory, you will get a failure if there is no passing case.
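The theory behavior mentioned in the last point can be sketched as follows. This is a hedged illustration (the fixture and datapoint names are made up, not anything from NUnit or this issue): when an `Assume` filters out every datapoint, no case passes, and NUnit reports the theory as failed rather than passed.

```csharp
using NUnit.Framework;

[TestFixture]
public class TheorySketch
{
    // Illustrative datapoints: every one of them violates the assumption below.
    [DatapointSource]
    public int[] Values = { -1, -2, -3 };

    [Theory]
    public void PositiveValuesHavePositiveSquareRoot(int x)
    {
        // All datapoints fail this assumption, so there is no passing case,
        // and the theory as a whole is reported as a failure.
        Assume.That(x, Is.GreaterThan(0));
        Assert.That(System.Math.Sqrt(x), Is.GreaterThan(0));
    }
}
```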
I'd vote for leaving the current behavior but giving the user some way to ask for a warning message if no cases are generated.
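In the meantime, a guard test is one way to make an empty source visible today, without any change to NUnit. A minimal sketch (all names here are illustrative, not part of any proposal): assert that the source enumerable itself is non-empty in its own test, so a faulty generator surfaces as a failure instead of silently producing zero cases.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class WidgetTests
{
    // Hypothetical generator; imagine a bug makes it yield nothing.
    private static IEnumerable<TestCaseData> Cases()
    {
        yield return new TestCaseData(1, 2, 3);
    }

    [Test]
    public void Cases_ShouldNotBeEmpty()
    {
        // Guards against a faulty generator: this test fails when
        // Cases() yields no items, even though the parameterized
        // test below would silently "pass" with zero cases.
        Assert.That(Cases(), Is.Not.Empty);
    }

    [TestCaseSource(nameof(Cases))]
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        Assert.That(a + b, Is.EqualTo(expected));
    }
}
```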
When using `[TestCaseSource(nameof(GeneratorMethod))]`, it would make sense to fail the test (or mark it as skipped) if the generator produces no inputs. At least don't show the test as passing :/