Closed nicojs closed 5 years ago
I've removed the hacktoberfest label for now because this will take a while to research, review and develop.
So it should get 10 hacktoberfest labels, right? 😎
During my internship I also experimented with several test runners. Both xUnit and NUnit have NuGet packages available to run test projects from code. I couldn't get the xUnit one to run from a .NET Standard 2.0 project, but the NUnit runner seemed to work perfectly and was super quick.
It offers the following advantages:
This would speed up Stryker.NET enormously, but of course only on NUnit test projects...
Would implementing a custom NUnit test runner inside Stryker.NET be worth the work, or do we want to focus on the VSTest adapter only?
All test frameworks that want Visual Studio support need to implement the VSTest test adapter for test discovery. This would most likely also include, for example, SpecFlow as well as unit test frameworks.
Is anyone looking into this? I would be interested in contributing, as this would ultimately reduce run time by orders of magnitude thanks to:
Yes, I am working on this. See https://github.com/stryker-mutator/stryker-net/tree/183-vstest-integration for a working POC (Windows only) of the VSTest integration.

I am currently working on improving the VSTest client library (provided by Microsoft), as it does not support vstest.console.dll (xplat) at the moment. We cannot integrate VSTest until that is fixed. See the bug filed with the vstest team: https://github.com/Microsoft/vstest/issues/1887

I have started the work here: https://github.com/Mobrockers/vstest/tree/1187_add_support_for_xpat_vstest_console_in_translationlayer
But this still needs work; vstest.console.dll is not yet successfully started.
@dupdob You could look into if/how we could get a mapping of the relations between individual tests and mutants. If we could find out which tests should be run for a mutant, only those tests would have to be run. We are not sure if we can do that with VSTest, but if it is possible it will speed up Stryker very much! We currently have our eye on RunEventHandler.HandleTestRunStatsChange. That way we can get a list of tests that have run during a test run. Another way to go could be injecting an extra line into each mutant during mutation. That line could write the name/id of each test that executes that line. For that to happen we have to find out which individual test is active. Maybe we could also mutate the test project itself for that? We are not sure. You could do some research to find out more, or give us another solution!
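The RunEventHandler idea could look roughly like this. This is a hedged sketch against the VSTest TranslationLayer API as I understand it; the class name is Stryker's own, only the one relevant member is shown, and exact property names may differ:

```csharp
using System;
using Microsoft.VisualStudio.TestPlatform.ObjectModel.Client;

// Sketch: a run-events handler that records which tests actually executed.
// A full implementation would implement all of ITestRunEventsHandler.
public class RunEventHandler
{
    public void HandleTestRunStatsChange(TestRunChangedEventArgs testRunChangedArgs)
    {
        // NewTestResults holds the results that finished since the previous
        // stats-change event; their test cases tell us which tests ran.
        foreach (var result in testRunChangedArgs.NewTestResults)
        {
            Console.WriteLine($"Executed: {result.TestCase.FullyQualifiedName}");
        }
    }
}
```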
That line could write the name/id of each test that executes that line. For that to happen we have to find out which individual test is active
That sounds difficult to do. I think we should split these 2 things:
The advantage of splitting it into 2 parts is that you can already improve performance dramatically if you know the mutation coverage, because you will be able to skip running tests altogether for mutants without coverage.
Another advantage is that you can already start with mutation coverage before we need to actually hook into the test runner. So @dupdob could already start with supporting "mutation coverage" in a separate PR.
This is similar to what we do with Stryker for JavaScript. We also let you configure it with coverageAnalysis. We have 3 settings: 'off', 'all' and 'perTest', for incrementally more fine-tuned performance gains. They only work if the test runner and test framework support it.
NB: We're very interested in the solution for "mutation coverage" at the JavaScript side, as we're still using istanbul which we want to replace with our own mutation coverage.
This is more or less what I had in mind. I am contemplating working on a demonstrator using the NUnit portable agent. This would require backporting it to VSTest later on. Here is a list of intermediate steps I could work on:
Step 1 requires being able to access coverage results from Stryker.
Step 2 looks simple.
Step 3 requires control over which tests are run.
Step 4 requires analysis of coverage data to build a strategy.
None of those steps requires using an in-process test runner from a theoretical standpoint, but they may prove too difficult to achieve without one. That is what this research is about.
My plan was, as a next step after getting VSTest integration working, to investigate using a custom data collector to hook into the VSTest process. See: https://github.com/Microsoft/vstest/blob/master/test/TestAssets/OutOfProcDataCollector/SampleDataCollector.cs
Using this we might be able to solve all the above problems. This could be researched separately from the VSTest integration, in theory.
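For context, a data collector that hooks the per-test-case lifecycle could look roughly like this. A hedged sketch assuming the Microsoft.VisualStudio.TestPlatform.ObjectModel.DataCollection API; the friendly name, type URI and log messages are made up for illustration:

```csharp
using System.Xml;
using Microsoft.VisualStudio.TestPlatform.ObjectModel.DataCollection;

// Hypothetical collector name and URI; a real integration would pick its own.
[DataCollectorFriendlyName("StrykerCoverage")]
[DataCollectorTypeUri("datacollector://Stryker/StrykerCoverage/1.0")]
public class StrykerDataCollector : DataCollector
{
    public override void Initialize(
        XmlElement configurationElement,
        DataCollectionEvents events,
        DataCollectionSink dataSink,
        DataCollectionLogger logger,
        DataCollectionEnvironmentContext environmentContext)
    {
        // These hooks fire around each individual test case: this is where
        // we could snapshot which mutants were hit while that test ran.
        events.TestCaseStart += (sender, args) =>
            logger.LogWarning(environmentContext.SessionDataCollectionContext,
                $"Starting test case: {args.TestCaseName}");
        events.TestCaseEnd += (sender, args) =>
            logger.LogWarning(environmentContext.SessionDataCollectionContext,
                $"Finished test case: {args.TestCaseName}");
    }
}
```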
The DataCollector looks very promising indeed! If we may believe their documentation, it allows us to execute some code between each individual test case. If we could insert a line in each mutant like:
File.AppendAllText("C:/RootDir/MyProject/MutantMapping.log", $"{mutantId}\n");
and read that file after each test case, we could know which mutants were active during that test case. Maybe we could use the Environment for this too? If we can access the test process Environment from within the DataCollector, of course.
I would fear the impact on performance. I am pretty sure writing to a file on each invocation would slow some tests to a crawl.
We can collect the results in memory and write them once at the end of the test run. We need to do this unless we can figure out another way to get the data out of the data collector and into Stryker.
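The in-memory idea could be as simple as the following sketch. The helper class and its names are hypothetical; mutants would call Record instead of doing file I/O, and the flush would happen once, e.g. at session end:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Hypothetical helper the mutated code could call instead of writing to a
// file on every invocation: hits are de-duplicated in memory, flushed once.
public static class MutantMap
{
    private static readonly HashSet<int> ActiveMutants = new HashSet<int>();

    // Injected into every mutant: a cheap set-add instead of file I/O.
    public static void Record(int mutantId) => ActiveMutants.Add(mutantId);

    // Called once at the end of the run (or of each test case).
    public static void Flush(string path) =>
        File.WriteAllLines(path, ActiveMutants.Select(id => id.ToString()));
}
```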
Note that this only has to happen during the initial test run. All test runs after that can skip this, since the mapping is already known by then. The initial test run would look a bit like this:
This would indeed have a great impact on the initial test run. But the mapping will have a much greater positive impact on Stryker's performance.
Also, like @Mobrockers said, we could try to keep the mutant 'ids' in memory until the end of the test case and then write them once.
This would indeed have a great impact on the initial test run. But the mapping will have a much greater positive impact on Stryker's performance.
Premature optimization is the root of all evil.
This is especially true for mutation testing. The best thing you can do is create a couple of performance tests and run them each night (small, medium, large project). That's the only way you know for sure something has "great impact"
I will of course execute performance-related tests. I was expressing a fear regarding performance. I think that the integration strategy will face several lifecycle/timing-related constraints that will drive the actual implementation. But we are not there yet.
For the time being, my focus is on updating the Stryker workflow to execute a test run with mutated code, no active mutant, and coverage activated, i.e. what I identified as step (1) 😉.
I will look into named pipes to create an IPC channel between Stryker and dotnet test. It will allow real-time interaction, better than report parsing after a test pass.
PR to add vstest.console.dll support to vstest client library has been submitted: https://github.com/Microsoft/vstest/pull/1893
The change worked on my system: vstest was started using dotnet from Stryker, and tests were discovered and run by Stryker through VSTest.
Great news
Some updates: I have named pipes running smoothly and stable, after a few fights. I have reached maturity level 2: (my) Stryker now assumes non-covered mutants are survivors and therefore does not test them.
Code available here: https://github.com/dupdob/stryker-net/tree/mutant_coverage
Implemented by #319
Next steps can be picked up in #388
By far the next biggest performance improvement is integrating with the test runner. Right now, we support the dotnet test command. This spins up a new dotnet process. There is a lot of overhead here before dotnet even starts executing the tests:

It would be great if we could host the child process ourselves. In this process we can load everything once and reuse that process to run tests (each time with a different ACTIVE_MUTATION environment variable). Having the overhead just once will dramatically improve performance.
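The mutant-switching part of this is cheap. As a sketch (the exact variable name and switching mechanism in Stryker.NET may differ from this illustration), a mutated method selects its behaviour from the environment at run time:

```csharp
using System;

public static class Calculator
{
    // Original code was:  return a + b;
    // After mutation, the mutant switch might look like this:
    public static int Add(int a, int b)
    {
        if (Environment.GetEnvironmentVariable("ACTIVE_MUTATION") == "1")
        {
            return a - b; // mutant 1: arithmetic operator replaced
        }
        return a + b; // original code
    }
}
```

Reusing one long-lived test process and only changing ACTIVE_MUTATION between runs means the per-mutant cost is this single environment check rather than a full process start.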
In Stryker (for JS) this is done with a test runner plugin system. The command test runner is just one of the available implementations. Other examples are the MochaTestRunner (stryker-mocha-runner) and the JestTestRunner (stryker-jest-runner). Something similar here would be great to have. Not sure if it needs to be its own separate dependency.

If you open a solution in Visual Studio, you can see the tests on the left. It doesn't matter if they are implemented in MSTest, NUnit or xUnit. This involves some kind of TestAdapter interface. Maybe we can leverage this API as well, so we only need to implement it once, instead of for each test runner.