marcphilipp opened this issue 8 years ago
The same is true for invalid custom display names supplied by the user, as described in #743.
I will therefore update the description of this issue to reflect the broader scope.
FYI: I have introduced a minimum set of Deliverables.
This reminds me of the approach we took for Vintage to handle multiple assertion errors using a MultipleFailuresError.
We could store all validation errors in a Throwable (using a MultipleFailuresError when there is more than one) that would be carried over to the execution phase; when it is non-empty, we would throw the exception instead of executing the test.
It might also be worth considering having this mechanism track all exceptions that occur while processing a single discovered test and throw them in the execution phase.
This way, instead of the whole process stopping, we could still process the rest of the tests that encountered no problems and fail only the individual tests.
PS: While I was writing this, @sbrannen changed the title and description, and now it sounds more like what I am writing here! :smiley:
@gaganis, thanks for sharing your ideas.
A few notes...
No need to use MultipleFailuresError. We're on Java 8 and can use suppressed exceptions. Plus, we already have a ThrowableCollector within the JUnit Jupiter engine that could potentially be used for this purpose as well.
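To make the suppressed-exceptions idea concrete, here is a minimal sketch (a hypothetical class, not Jupiter's actual ThrowableCollector API) of how validation errors collected during discovery could be carried over and rethrown during execution:

```java
// Hypothetical sketch only; Jupiter's real ThrowableCollector has a different API.
class DiscoveryErrorCollector {

    private Throwable firstError;

    // Record a validation error without aborting the discovery phase.
    void add(Throwable error) {
        if (firstError == null) {
            firstError = error;
        }
        else {
            // Additional errors are attached as suppressed exceptions.
            firstError.addSuppressed(error);
        }
    }

    boolean isEmpty() {
        return firstError == null;
    }

    // During the execution phase, rethrow instead of executing the test/container.
    void rethrowIfPresent() throws Throwable {
        if (firstError != null) {
            throw firstError;
        }
    }
}
```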
The scope of this issue has been broadened in order to encompass the Platform as well.
Rather than using exceptions of any kind, having some kind of notification collector as a return or in/out parameter seems more flexible, since any exception will stop an engine's discovery phase whereas many notifications are just warnings or pieces of information.
> Rather than using exceptions of any kind, having some kind of notification collector as a return or in/out parameter seems more flexible, since any exception will stop an engine's discovery phase whereas many notifications are just warnings or pieces of information.
Good point!
I'll change the title of this issue accordingly.
FYI: the title and description have been updated.
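As a rough illustration of the notification-collector idea mentioned above (all names here are hypothetical, not part of the actual JUnit Platform API), such a collector could be handed to each engine during discovery so that warnings and errors are reported without throwing:

```java
// Hypothetical sketch; not an actual JUnit Platform type.
interface DiscoveryNotificationCollector {

    enum Severity { INFO, WARNING, ERROR }

    // Engines call this instead of throwing, so discovery can continue.
    void report(Severity severity, String message, Throwable cause);
}
```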
If this issue is addressed in time for 5.0 M6, we might want to move #949 to M6 as well.
Added deliverable regarding changes made in #971.
Updated Proposals based on discussions in #210.
Introduced Areas of Applicability section.
Overhauled and expanded Areas of Applicability section.
Result of a team discussion: Move to 5.1 Backlog. I will provide a PR for a possible solution and we can continue discussing it on the actual PR.
I have analyzed the scenarios mentioned within the issue. With the current state of the discovery mechanism in place, I can hardly see a way to solve this issue with some kind of general approach.
The places where the cases occur have very little in common and are spread throughout most of the discovery code base. In most situations, e.g. the lookup of the DisplayName in org.junit.jupiter.engine.descriptor.JupiterTestDescriptor#determineDisplayName, one is not even able to create some kind of ErrorReportingTestDescriptor, as doing so would require reporting the newly created ErrorReportingTestDescriptor without knowledge of the current context. Therefore, handling that case would have to be implemented by the caller of the constructor of ClassTestDescriptor and all MethodBasedTestDescriptors, or better, those constructors should be changed to factory methods that handle it internally. Still, this solves only one of the required cases of the following:
For me, most of them are much better handled during the test execution phase. Therefore, I wonder if we might revisit the issues and think about moving the preconditions and verifications to the execution phase.

[…] TestDescriptor. We would rather have to keep the exception within the context and create an ErrorReportingTestDescriptor for each TestDescriptor found while the context holds the exception. […] TestDescriptor turning into something that reports the exception for that particular TestEngine. But again, this is more a topic for the execution phase, as it should stop the affected TestEngine from executing any tests.

I think we should split the topic into at least two topics:
1) Where applicable, move certain validations/exceptions from the discovery phase into the execution phase.
2) Refactor/simplify the discovery resolvers to support a general mechanism for reporting discovery errors in the form of an ErrorReportingTestDescriptor (a rough sketch follows below).
While 1 could also be solved with 2 in place, we could still provide a temporary solution for those cases.
@junit-team/junit-lambda: What do you think?
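For illustration, here is a rough sketch of what such an ErrorReportingTestDescriptor could look like, built on the Platform's existing AbstractTestDescriptor support class. The class itself and its error-handling contract are assumptions for discussion, not an existing Jupiter type:

```java
import org.junit.platform.engine.UniqueId;
import org.junit.platform.engine.support.descriptor.AbstractTestDescriptor;

// Hypothetical sketch: a descriptor that only carries an error encountered during
// discovery. The engine's executor would detect this type and fail the node with
// the stored error instead of running anything.
class ErrorReportingTestDescriptor extends AbstractTestDescriptor {

    private final Throwable error;

    ErrorReportingTestDescriptor(UniqueId uniqueId, String displayName, Throwable error) {
        super(uniqueId, displayName);
        this.error = error;
    }

    Throwable getError() {
        return error;
    }

    @Override
    public Type getType() {
        return Type.TEST; // reported as a single failing test node
    }
}
```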
Introduced "invalid @Nested
class declaration" as mentioned in #1223.
Also related to #121.
@nicolaiparlog The CSV-related warnings/errors may apply here -- or in one of the related issues.
Also related to #876.
This issue has been automatically marked as stale because it has not had recent activity. Given the limited bandwidth of the team, it will be automatically closed if no further activity occurs. Thank you for your contribution.
Following up here on the issue posted at https://github.com/junit-team/junit5/issues/2659 (whoops):
This behavior can cause quite a headache and be very confusing. If I were to try to PR a minimal change which just did the below:

// Rough Java sketch of the idea; names like discoveredTestMethods are
// placeholders, not actual JUnit APIs.
Map<String, Class<?>> methodNameTypeMap = new HashMap<>();
for (Method testMethod : discoveredTestMethods) {
    methodNameTypeMap.put(testMethod.getName(), testMethod.getReturnType());
}
// When test results are reported:
methodNameTypeMap.forEach((testName, returnType) -> {
    if (returnType != void.class) {
        System.err.println("[WARNING] Found test " + testName + " with return type "
                + returnType + ". JUnit only resolves void test methods -- this test was skipped.");
    }
});
How difficult would this change be to make for someone who doesn't know the codebase? The reason I ask is that this leads to scenarios like these:
This was a multi-day investigation for me =/ I don't want anyone else to have to go through that again. Can we make this better for everyone?
Should this also cover @Suite? Its predicate class IsSuiteClass seems to silently ignore the class if the predicate does not match, despite being explicitly annotated.
Yes, this should also apply to @Suite classes. I'll add that to this issue's description.
Hi. This is a big issue when writing unit tests in Kotlin. It's really easy to create a unit test in Kotlin that implicitly returns a non-void return type. JUnit then just silently fails to run any of your Kotlin unit tests that have a non-void return type.
As an example, in a codebase with about 800 unit tests, I found that about 20 of the unit tests were being silently ignored by JUnit 5 because of this issue.
Here a repo with a minimal reproducible example for easily hitting this bug in Kotlin:
https://github.com/simondean/kotlin-junit-run-blocking-issue/blob/main/src/test/kotlin/IssueTest.kt
@Test
fun `also will be silently skipped`() = runBlocking {
    // This bug is not specific to Mockito but the bug is easily triggered by putting a verify() call at the end of a unit test
    val value = "anything"
    // Because verify() has a return value, so does the whole unit test, which causes JUnit to silently ignore the test (and any others like it in the codebase)
    verify(value).toString()
}
See the following duplicate issues for more info/context:
Given this issue has been open for 5 years and it's arguably a more serious issue now that people are using Kotlin with JUnit, would it be worth boosting the priority of this issue? Thanks!
> would it be worth boosting the priority of this issue? Thanks!
I've added the new label to signal to the team that it needs to be reconsidered.
Thanks @sbrannen
> As an example, in a codebase with about 800 unit tests, I found that about 20 of the unit tests were being silently ignored by JUnit 5 because of this issue.
@simondean Out of interest, how did you detect these? I'm looking to implement similar logic and am searching for prior art currently before implementing something bespoke.
This issue has been automatically marked as stale because it has not had recent activity. Given the limited bandwidth of the team, it will be automatically closed if no further activity occurs. Thank you for your contribution.
This issue has been automatically closed due to inactivity. If you have a good use case for this feature, please feel free to reopen the issue.
Hey folks, just wondering whether there was any movement on this?
I recently stumbled on the exact same issue as in https://github.com/junit-team/junit5/issues/1223 where we had JUnit 4 tests being run in JUnit 5 without Vintage on the classpath, and the tests silently passed with 0 test cases run. I'm hoping that this could be better reported to the developer in case they don't realise their tests didn't actually run.
This is definitely on the short list for when I manage to set aside more time for JUnit but nothing concrete has happened, yet.
Overview

Validation errors (e.g., for invalid @BeforeEach method declarations) should not abort the entire discovery phase. Instead, the discovery phase should continue, with the error tracked and reported during the execution phase.

Areas of Applicability

- @BeforeAll, @AfterAll, @BeforeEach, and @AfterEach method declarations
- @Test, @TestFactory, @RepeatedTest, and @ParameterizedTest method declarations (see #2244)
- @Nested test class declarations (see #1223, #2717, and #1736)
- @Suite class declarations

Proposals
- A special kind of TestDescriptor such as an AlwaysFailingTestDescriptor, DeadOnArrivalTestDescriptor, or ErrorReportingTestDescriptor. The error stored in such a TestDescriptor could then be thrown as an exception during the execution phase instead of executing the corresponding container or test.
- A TestDescriptor that signals an error that was encountered during the discovery phase.
- Some kind of collector passed from the Launcher into each TestEngine to report errors.

Related Issues
- #121
- #210
- #743
- #750
- #835
- #876
- #949
- #971
- #1026
- #1074
- #1223
- #1944
- #2244
- #2311
- #2717
- #4125
Deliverables

- Resolve TODO [#242] within the code base.
- […] ClassTestDescriptor.