xunit / visualstudio.xunit

VSTest runner for xUnit.net (for Visual Studio Test Explorer and dotnet test)
https://xunit.net/

Can `could not find dependent assembly xunit.core` fail instead of skip? #369

Closed bradwilson closed 9 months ago

bradwilson commented 1 year ago

Discussed in https://github.com/xunit/xunit/discussions/2706

Originally posted by **lawrence-laz** April 5, 2023

I am running an xUnit test project in Azure DevOps with the following config:

```yml
- task: VSTest@2
  displayName: Run Automated Tests
  inputs:
    testSelector: testAssemblies
    testAssemblyVer2: |
      **\*IntegrationTests.dll
      **\*UnitTests.dll
      !**\obj\**
    searchFolder: $(System.DefaultWorkingDirectory)
    runTestsInIsolation: true
    platform: ${{ parameters.buildPlatform }}
    otherConsoleOptions: '/Logger:trx /Enablecodecoverage'
```

We had a problem where this one test project was being skipped with the following message:

```
[xUnit.net 00:00:33.44] Skipping: ***.IntegrationTests (could not find dependent assembly 'xunit.core, Version=2.4.2')
```

I came across this by chance and have fixed the issue, but the next time this happens I would like xUnit to fail the test run instead of skipping it, so that I am aware that part of the tests are no longer running in the pipeline. Can this be configured as it is today? If not, could such a setting be considered as an addition to xUnit?
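Until the runner can fail on its own here, one workaround is to fail the pipeline yourself by scanning the test log for the skip message. A minimal, hypothetical sketch (the log path, and the exact message format taken from the line quoted above, are assumptions about your setup):

```python
import re
import sys

# Hypothetical pipeline guard: exit non-zero if xUnit reported that it
# skipped an entire assembly during discovery. The pattern is based on the
# log line quoted in the report above.
SKIP_PATTERN = re.compile(r"Skipping: \S+ \(could not find dependent assembly")

def check_log(log_text: str) -> list:
    """Return every line where xUnit skipped a whole test assembly."""
    return [line for line in log_text.splitlines() if SKIP_PATTERN.search(line)]

if __name__ == "__main__" and len(sys.argv) > 1:
    text = open(sys.argv[1], encoding="utf-8").read()
    skipped = check_log(text)
    for line in skipped:
        print("ERROR:", line.strip())
    sys.exit(1 if skipped else 0)  # non-zero exit fails the pipeline step
```

Run as an extra pipeline step against the captured console log after the VSTest task; a non-zero exit code makes the build fail instead of silently passing with fewer tests.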
bradwilson commented 1 year ago

Taking a look into the code where this happens, it's during test discovery. There is no "failing" mechanism here, since nothing is being run yet:

https://github.com/xunit/visualstudio.xunit/blob/86c0e57f3d8458b8b0f5e436f732dd9546d563dd/src/xunit.runner.visualstudio/VsTestRunner.cs#L320-L323

I'm not yet sure whether such a failure could be raised to VSTest when there's nothing being run.

lawrence-laz commented 1 year ago

> I'm not yet sure whether such a failure could be raised to VSTest when there's nothing being run.

I see that we are catching an exception there now. Could we throw it instead? Or maybe store it in some collection to be re-thrown later as an `AggregateException`?

Throwing would probably produce a non-zero exit code, assuming the exception is not caught at the VSTest level, which would fail the pipeline.
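The collect-then-rethrow idea being proposed could look roughly like this; a language-neutral Python sketch with hypothetical names (the real runner is C#, where `AggregateException` is built in):

```python
class AggregateError(Exception):
    """Minimal stand-in for .NET's AggregateException."""
    def __init__(self, message, inner):
        super().__init__("%s (%d failure(s))" % (message, len(inner)))
        self.inner = inner

def discover_all(assemblies, discover_one):
    """Try to discover tests in every assembly. Failures are stashed rather
    than swallowed, so scanning continues past a broken assembly, and all
    failures are surfaced together at the end."""
    tests, failures = [], []
    for assembly in assemblies:
        try:
            tests.extend(discover_one(assembly))
        except Exception as ex:
            failures.append(ex)  # stash instead of skipping silently
    if failures:
        raise AggregateError("test discovery failed for some assemblies", failures)
    return tests
```

Note that because the rethrow happens only after every assembly has been scanned, this variant would not abort discovery of the other, unbroken assemblies.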

bradwilson commented 1 year ago

The problem with throwing is that, in theory, we could be asked to scan multiple test assemblies at the same time to do simultaneous discovery (imagine the case where you just built your whole solution, and pop open Test Explorer). Throwing because one of those failed may artificially abort the scanning process rather than continuing to scan other assemblies (which may not be broken).

The only way to stash something would be to return a single test case which would later try to be run and fail. Of course, that would give rather strange output to the user in Test Explorer, as instead of finding all their tests, they would only see a single test with perhaps a name that indicated what went wrong, which they would then have to run to see the actual problem. It very much delays the reporting in an unexpected and perhaps not-very-useful way.
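The "single test case that fails later" alternative described above, sketched in Python with hypothetical names (the real runner would report through the VSTest discovery sink rather than returning tuples):

```python
def discover_or_placeholder(assembly, discover_one):
    """Discover tests normally; on failure, return one synthetic test case
    whose only job is to fail later with the original discovery error."""
    try:
        return discover_one(assembly)
    except Exception as ex:
        reason = str(ex)
        def fail_later():
            # Running the placeholder surfaces the discovery error, but only
            # once the user actually runs it -- the delayed, easy-to-miss
            # reporting that makes this option unattractive.
            raise RuntimeError("discovery of %s failed: %s" % (assembly, reason))
        return [(assembly + "::DiscoveryFailed", fail_later)]
```

As noted above, the user would see a single oddly named test in Test Explorer instead of their real tests, and would have to run it to learn what actually went wrong.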

I'd really like to understand exactly how you got into this situation to help me better understand whether this is a common behavior that's worth all of these gymnastics and usability issues.

lawrence-laz commented 1 year ago

> The problem with throwing is that, in theory, we could be asked to scan multiple test assemblies at the same time to do simultaneous discovery (imagine the case where you just built your whole solution, and pop open Test Explorer). Throwing because one of those failed may artificially abort the scanning process rather than continuing to scan other assemblies (which may not be broken).

Hm, that makes sense for the local environment and Test Explorer. But when tests are run in an automated pipeline, I feel this should fail: the whole point of the pipeline is to ensure quality, and this slips through unnoticed far too easily. Maybe there could be a "strict mode" parameter or something like that, as an opt-in for automated pipelines?

> I'd really like to understand exactly how you got into this situation to help me better understand whether this is a common behavior that's worth all of these gymnastics and usability issues.

I don't have the exact steps for how we got into this right now, but the gist of it is that we have a big codebase with multiple test projects that share some common dependencies, and we were introducing Verify.Xunit to some of them. This created conflicting xUnit dependency versions, which ultimately led to some test projects running fine and others failing with the error in the original report.

bradwilson commented 9 months ago

Thinking on this more, there's no way to effectively do what you're asking.

In particular, you're missing xunit.core, which is required to run tests at all. Returning a test case to raise this as an error would itself require xunit.core to be in place to run that failing test. You're in a Catch-22: you can't run your tests because xunit.core is missing, and we can't run a "fake" test to report that xunit.core is missing, because even that fake test needs xunit.core to run.

As such, I'm closing this as Won't Fix.