Closed — machiev closed this issue 7 years ago
It works as designed. :)
The test itself didn't fail and the data provider is not part of the test.
In fact, the test didn't run at all, so its status cannot be FAILURE.
Maybe something could be improved, but SKIP looks logical IMO.
@cbeust Could you confirm?
@machiev BTW, why did you expect FAILURE? Maybe we can find a compromise depending on your needs ;)
@juherr's interpretation is correct.
The right status here is a SKIP, not a FAILURE. A FAILURE would indicate that something might be broken in your project, which is not the case (it's the test that's broken, not the functionality you're testing).
Hope this makes sense.
Guys, I see your point and understand that FAILURE is not very suitable in this case, as the test per se hasn't been executed. SKIP, however, is something one can miss (some junior guys do it, believe me), as by default it doesn't fail a build. Additionally, SKIP is the state for an ignored test, so when an error happens (in the data provider) and a test gets skipped, I cannot see the cause immediately.
Let's consider different cases of failures in tests. Take a @BeforeMethod which fails, e.g. throws an exception due to an incorrect resource URL: this setup method fails the test suite even though no tests were run. One more example would be a test that fails in the //given section with an exception; the test will be FAILED then.
I want to convey a message that every fail in the test infrastructure either in data provider or setup code should cause a failure. Test itself may be ignored but there should be a visible failure of that data provider.
Maybe we could have a third state like ERROR to indicate that the test itself is neither PASSED nor FAILED, but something in the test code is broken.
Anyway, thanks for your comments, and great job. :+1:
True! Like configuration methods, a data provider method could be reported as a failed method when it throws an exception.
@cbeust WDYT?
> Maybe we could have a third state like ERROR to indicate that a test itself is neither PASSED nor FAILED but something in the test code is broken.
INCONCLUSIVE is already a general term which describes a test state, and it sounds better IMO.
@cbeust WDYT?
I'm not a fan of the idea of introducing a new state. It will introduce a lot of complications in the code base and also at the concept level. JUnit made a similar mistake when they introduced the distinction between failures and errors, and nobody could really remember the difference between those.
At the end of the day, when you run your tests, you look at the final result. If it's not SUCCESS, then you have some investigation to do, and this investigation should quickly lead you to whether the cause is a FAIL or a SKIP.
@cbeust And what about the first point and the rule "user issue = crash"? We already apply it at load time when the user has written bad tests, but not yet at runtime. I can't say whether it is a good or bad idea.
I know this is closed, but it's an issue that's caused problems for us. An exception in a dataprovider is the sort of thing we want to be notified about.
We have some tests that are allowed to skip (e.g. tests that can only run in certain environments), so people often don't notice additional skips in the test report. We have hundreds of thousands of tests and they're usually run on automated CI servers; we really only notice test failures if they cause a build failure.
If there were a mode we could enable that causes the tests to fail when a dataprovider fails, we would definitely enable it. It would be even better if it propagated the exception in a highly visible way.
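Until such a mode exists, the CI-side behavior described above can be approximated after the run. Here is a minimal, hedged sketch (class and method names `ResultGate`/`exitCode` and the status keys are hypothetical, not TestNG API) of a post-run gate that treats any skipped test as a build failure:

```java
import java.util.Map;

public class ResultGate {
    // Returns a non-zero exit code if anything other than PASSED occurred,
    // so skips caused by broken data providers also break the build.
    static int exitCode(Map<String, Integer> counts) {
        int failed = counts.getOrDefault("FAILED", 0);
        int skipped = counts.getOrDefault("SKIPPED", 0);
        return (failed > 0 || skipped > 0) ? 1 : 0;
    }

    public static void main(String[] args) {
        // A run with skips is treated as a failing build.
        System.out.println(exitCode(Map.of("PASSED", 10, "SKIPPED", 2))); // prints 1
    }
}
```

The counts themselves could be read from whatever report the CI runner already produces; the point is only that SKIP is promoted to a build-breaking status.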
@lbergelson If this is so important to you, then maybe a workaround would be to calculate the values returned by the data provider in a test itself:
```java
private Object[][] data;

@Test
public void initDataProvider() {
    // An exception here fails this test visibly instead of
    // silently skipping the data-driven tests.
    this.data = calculateData();
}

@DataProvider
public Object[][] dp() {
    return data;
}

// Make the consuming test depend on the init test so data is ready:
@Test(dataProvider = "dp", dependsOnMethods = "initDataProvider")
public void actualTest(Object value) { /* ... */ }
```
@cbeust Thanks for the suggestion. That's a good idea. It is some extra boilerplate code to write for every dataprovider, though. I wish there were a way to generate that test automatically for each dataprovider.
Understood. The idea is that data providers should never fail, so you should only do this for your (hopefully rare) data providers that can somehow fail.
In general, you should look at any test result that is not 100% SUCCESS as suspicious, whether the failure is caused by a FAIL or a SKIP.
TestNG Version
6.10
Expected behavior
Failure to create test data via DataProvider should mark test as failed.
Actual behavior
Test is marked as skipped.
Is the issue reproducible on runner?
Test case sample
The problem is in the code of the Invoker class (line 1115):
I can see that
throwable
above is RuntimeException instance and the cause is set to NullPointerException. So basically all exceptions thrown in DataProvider are wrapped by RuntimeException.