KRow opened this issue 13 years ago
Any suggestions on how to improve that?
Sure, if you view the contract of the TestNG Ant task to be that it will run tests and generate results, then any case where results are not generated should result in a BuildException.
This leaves two solutions: throw the exception, or generate the results even though an error occurred.
Both solutions seem reasonable. On one hand, TestNG couldn't load a required class or ran out of memory, and this prevented the tests from being run, so the build should fail. It's a TestNG failure, not a failed test.
On the other hand, the TestNG failure could be treated as a configuration issue. Conceptually, loading a class could be considered part of a @BeforeSuite configuration method, so, like other configuration methods, a failure could be recorded as a testcase element in the results XML indicating what went wrong.
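For what it's worth, the analogy can be reproduced with an ordinary failing configuration method. The sketch below is illustrative only (the class name and the missing class it looks up are made up); the point is that TestNG records the configuration failure and skips the dependent tests, which is roughly the behavior being suggested for the "class could not be loaded" case.

```java
import org.testng.annotations.BeforeSuite;
import org.testng.annotations.Test;

// Illustrative only: a failing configuration method, roughly analogous to the
// "class could not be loaded" scenario discussed above.
public class ClassLoadingAnalogy {

    @BeforeSuite
    public void loadRequiredClass() throws ClassNotFoundException {
        // Simulates TestNG failing to load a required class.
        Class.forName("com.example.MissingClass");
    }

    @Test
    public void someTest() {
        // Never runs: TestNG skips tests whose configuration failed, and the
        // configuration failure shows up in the generated results rather than
        // the whole run silently producing no output.
    }
}
```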
Generating the results with a failed testcase is probably closer to the current behavior and might be more configurable.
I haven't had time to look into the TestNG source, so I'm speaking from a user's perspective. One of these solutions might be more in line with the current internal behavior, making it the better choice.
I intentionally made the "Can't find a class" scenario fatal: TestNG will just abort immediately. I think the Ant task should reflect this. It looks like a trivial fix; if you're curious to do it yourself, the Ant task is TestNGAntTask.java.
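For reference, the shape of that fix in an Ant task is roughly the sketch below. This is not the actual TestNGAntTask code: the failOnError attribute, the runTestNg() helper, and the use of an exit code to signal a fatal TestNG error are all assumptions made for illustration; the only real APIs used are Ant's Task and BuildException.

```java
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;

// Hypothetical, simplified task. The real change would go in TestNGAntTask.java;
// how the TestNG run reports an internal error is an assumption here.
public class SimplifiedTestNgTask extends Task {

    // Assumed attribute: whether an internal TestNG error should fail the build.
    private boolean failOnError = true;

    public void setFailOnError(boolean failOnError) {
        this.failOnError = failOnError;
    }

    @Override
    public void execute() throws BuildException {
        int exitCode = runTestNg(); // placeholder for launching TestNG

        // If TestNG aborted before producing results (e.g. a class could not be
        // loaded), surface that as a BuildException instead of a quiet log line.
        if (exitCode != 0 && failOnError) {
            throw new BuildException("TestNG aborted with exit code " + exitCode, getLocation());
        }
    }

    private int runTestNg() {
        // Placeholder: the real task forks/invokes TestNG and captures its
        // result; a non-zero value here stands in for a fatal TestNG error.
        return 1;
    }
}
```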
I looked at the code briefly but not long enough to write a fix. I'll try to take another look later today.
I think the same problem occurs when TestNG times out the entire test suite; it should probably cause Ant to fail the build in that case, but it doesn't.
When TestNG encounters an exception such as "org.testng.TestNGException: An error occurred while instantiating class ...", execution of the tests is halted and a stack trace is output to the console. This works well when running the tests manually: the test suite fails quickly and reports the error rather than wasting time running a partial test suite.
However, when the tests are run automatically, for example by a continuous integration server, the stack trace is far less visible, and nothing like a BuildException calls attention to the issue. This wouldn't be a problem if test results were generated that contained at least one failed testcase. After the tests run, the results of the various test suites can be aggregated and the test results for the build published. If a failed result were output for the test suite that was unable to run, it would be included in the published results and bring attention to the failure. Missing results are much less obvious and are easily overlooked if the test result aggregation searches for result files rather than listing each expected result file individually.
The current result when one of numerous test suites fails (using Hudson as the CI server in this case) is a SUCCESSFUL build status from the Ant task, a stack trace written to the build's console log, and aggregated test results with a lower test count than expected. This is a rather quiet failure compared to a failed test or a failed build being reported.