Closed: bradwilson closed this 9 months ago
Looking into the code where this happens: it occurs during test discovery. There is no "failing" mechanism here, since nothing is being run yet.
I'm not yet sure whether such a failure could be raised to VSTest when there's nothing being run.
> I'm not yet sure whether such a failure could be raised to VSTest when there's nothing being run.
I see that we are catching an exception now. Could we throw it instead? Or maybe store it in some collection to be re-thrown at a later time as an AggregateException?
Throwing would probably produce a non-zero exit code if the exceptions are not caught at the VSTest level, which would fail the pipeline.
The problem with throwing is that, in theory, we could be asked to scan multiple test assemblies at the same time to do simultaneous discovery (imagine the case where you just built your whole solution, and pop open Test Explorer). Throwing because one of those failed may artificially abort the scanning process rather than continuing to scan other assemblies (which may not be broken).
The only way to stash something would be to return a single test case that would fail when later run. Of course, that would give rather strange output in Test Explorer: instead of finding all their tests, the user would see only a single test, with perhaps a name indicating what went wrong, which they would then have to run to see the actual problem. It very much delays the reporting in an unexpected and perhaps not-very-useful way.
I'd really like to understand exactly how you got into this situation to help me better understand whether this is a common behavior that's worth all of these gymnastics and usability issues.
> The problem with throwing is that, in theory, we could be asked to scan multiple test assemblies at the same time to do simultaneous discovery (imagine the case where you just built your whole solution, and pop open Test Explorer). Throwing because one of those failed may artificially abort the scanning process rather than continuing to scan other assemblies (which may not be broken).
Hm, that makes sense for a local environment and Test Explorer. But when tests are run in an automated pipeline, I feel this should fail: the goal of the pipeline is to ensure quality, and this slips through unnoticed far too easily. Maybe there could be a "strict mode" parameter, or something like that, as an opt-in for automated pipelines?
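The suggested opt-in might look something like the following. This is purely a hypothetical sketch of the idea; no such flag exists in xunit or VSTest today, and every name here is invented:

```python
import sys

def finish_discovery(errors, strict=False):
    """Return the process exit code once discovery has finished.

    In the default mode, broken assemblies only produce warnings; in the
    hypothetical strict mode they yield a non-zero exit code, which a CI
    pipeline would treat as a failed step.
    """
    for err in errors:
        print(f"warning: {err}", file=sys.stderr)
    return 1 if (strict and errors) else 0
```

A pipeline could then enable strict behavior via a hypothetical command-line switch, while local Test Explorer runs keep the forgiving default.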
> I'd really like to understand exactly how you got into this situation to help me better understand whether this is a common behavior that's worth all of these gymnastics and usability issues.
I don't have the exact steps for how we got into this right now, but the gist is that we have a big codebase with multiple test projects sharing some common dependencies, and we were introducing Verify.Xunit to some of them. This produced some conflicting xunit dependencies, which ultimately led to some test projects running fine and others failing with the error in the original report.
Thinking on this more, there's no way to effectively do what you're asking.

In particular, you're missing `xunit.core`, which is required to be able to run tests. Attempting to return a test to be run to raise this as an error would require `xunit.core` to be in place to run that failing test. You're in a Catch-22: you can't run your tests because of a lack of `xunit.core`, and we can't run a "fake" test to say you don't have `xunit.core`, because you need `xunit.core` even to run that fake test.
As such, I'm closing this as Won't Fix.
Discussed in https://github.com/xunit/xunit/discussions/2706