Closed pvlakshm closed 6 years ago
Hey @aanurca. If you are accessing a shared resource, such as logging to a file, you may want to add an extension method to your logger class that wraps the log call in a lock, like this:
// ... static extension class definition here
static readonly object _fileLock = new object();

lock (_fileLock)
{
    // ... write to log here
}
There are all kinds of semaphore and locking classes for locking across processes and threads as well; if you look into it you'll find plenty of examples.
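For the cross-process case mentioned above, one option is a named Mutex; here is a minimal, hypothetical sketch (the mutex name and class are invented for illustration, not from this thread):

```csharp
using System;
using System.Threading;

static class CrossProcessLogLock
{
    // A named mutex is visible to other processes on the same machine.
    static readonly Mutex _mutex = new Mutex(false, @"Global\MyAppLogFile");

    public static void WriteLog(string message)
    {
        _mutex.WaitOne();
        try
        {
            // ... write to the shared log file here
        }
        finally
        {
            _mutex.ReleaseMutex();
        }
    }
}
```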
This is now available as a Beta on NuGet. Framework: https://www.nuget.org/packages/MSTest.TestFramework/1.3.0-beta2 Adapter: https://www.nuget.org/packages/MSTest.TestAdapter/1.3.0-beta2
Will blog about it soon.
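For reference, in-assembly parallelism in these builds is switched on through a .runsettings file. A minimal sketch (the worker count and scope values here are examples, not recommendations):

```xml
<RunSettings>
  <MSTest>
    <Parallelize>
      <Workers>4</Workers>
      <Scope>MethodLevel</Scope>
    </Parallelize>
  </MSTest>
</RunSettings>
```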
I hit the parallelism issue while writing an application and used this solution by taking the aforementioned prerelease NuGet packages. The parallelism works as explained (with the appropriate scope in the runsettings file). I am, however, running into a problem where using the ClassInitialize attribute throws this particular error while running the tests -
Result Message: Method E2ECustomerSimTests.DSATests.Init has wrong signature. Parameter 1 should be of type Microsoft.VisualStudio.TestTools.UnitTesting.TestContext.
Note that I cannot control the version of Microsoft.VisualStudio.TestTools.UnitTesting class because that comes from the aforementioned TestFramework pkg.
My class init method looks like the following (and, as you can see, it has the correct syntax) -
[ClassInitialize()]
public static async void ClassInit(TestContext tc)
{
// random code
// Wait for DE to receive the entities created
bool proceedToVerifyInDE = await WaitAndPollDE(sdkHelper);
if (!proceedToVerifyInDE)
throw new Exception("Could not find the campaign created in WaitAndPollDE within appropriate time");
}
NuGet version used for TestAdapter and TestFramework: 1.3.0-build-20180116-01
NuGet repository used: https://dotnet.myget.org/F/mstestv2/api/v3/index.json
Any help on fixing this issue? (Happy to provide any other info that you need.)
Talked offline with Akanksha and apparently changing method return type from 'void' to 'Task' did the trick.
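For anyone hitting the same error, the working shape of the method, per the fix described above (return `Task` instead of `void`), looks roughly like this sketch (class and method names follow the example in the thread; the body is a placeholder):

```csharp
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DSATests
{
    [ClassInitialize]
    public static async Task ClassInit(TestContext tc)
    {
        // Returning Task (not 'async void') lets the framework
        // observe completion and any exceptions from async setup.
        await Task.CompletedTask; // placeholder for real async setup
    }
}
```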
Wow! It's like the SAS of testing "Sweet And Simple". Great job guys.
Just wanted to share some results.
Solution with 1172 tests (all passing)
MSTest V1 test run: 0:12:33 / all tests passing
MSTest V1 test run with "Run tests in parallel" Test Explorer setting: 0:08:17 / all tests passing
I had already split the biggest test project into 3 smaller test projects because the parallel execution is at assembly level only.
MSTest V2 1.3.0-beta2 (without runsettings): 0:09:38 / all tests passing
MSTest V2 1.3.0-beta2 (with 4 workers): 0:02:27 / ~100 tests failing.
Wow. That's amazing.
MSTest V2 1.3.0-beta2 (with 56 workers): 0:02:03 / ~100 tests failing.
https://github.com/Microsoft/testfx/issues/26
Awsome work.
Awsome work.
It's only truly awesome when you leave out the "e". It's the same principle behind thicc and succ really.
Is there a way to opt in only certain test classes within an assembly and not the others? We have a few long running functional tests that we want to run in parallel. However, the classes in the rest of the assembly are not yet ready for parallelism. Right now, I will have to configure a separate test task in VSTS for the functional ones and specify a different runsettings file from the rest of the tests. Is there another way? Currently, we find tests in all assemblies by using regex matching *test.dll for example in the VSTS test task.
You can decorate some tests with [TestCategory("Functional")]
and use a test filter in the test task to run one set or the other. But you will still have to have two tasks, one to include and another to exclude the target tests.
@abatishchev Thanks for the suggestion. All of these are actually marked Functional. Perhaps, the long running ones can also be marked as RunInParallel and then we can target them specifically. I was hoping there was a ParallelizationStrategy: Optin/OptOut in the runsettings and we could have used OptIn. But setting up a new VSTS task shouldn't be that hard. Thanks again.
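As an aside, MSTest v2 also provides a [DoNotParallelize] attribute that can be applied to a test class or method to keep it out of parallel execution while the rest of the assembly still parallelizes, which may cover the opt-out half of this scenario. A minimal sketch (class and method names are invented):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[DoNotParallelize] // tests in this class run serially; other classes still parallelize
public class LegacyTests
{
    [TestMethod]
    public void NotThreadSafeYet()
    {
        // ... test that is not yet ready for parallel execution
    }
}
```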
Hi @pvlakshm and team, this is so cool. I have been waiting on this feature all my life. I had problems at the beginning migrating to MSTest v2, but eventually I figured out that I needed to remove every reference to QualityTools, and now it is working fine.
Works smoothly on TFS. Sorry for the edits on this post.
Hi, I got some feedback.
I want to be able to mark some tests as dependent on each other. What that means is that the dependent tests will not run in parallel with each other, but they can run in parallel with the rest of the tests.
A better explanation, given the following tests:
I want all the tests to run in parallel, but if testC is running then testA will not run until testC is complete.
Does it make sense?
Why do your tests depend on each other? Make them independent and isolated. If needed, call dependencies explicitly (make them private methods, not public test methods).
@abatishchev They are dependent because they both are UI tests that send an email to a specific test account, and I need to verify each email, but if they run at the same time and 2 emails are sent, it is really messy to identify which email belongs to each test. So I want to run one first, then the other one.
The example above is just an example; there are more complicated scenarios of tests using the same test data, and I think someone else might face a related issue.
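One workaround for serializing just a dependent pair, while everything else stays parallel, is a lock object shared only by those tests. A hypothetical sketch (class, method, and field names are invented for illustration):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class EmailTests
{
    // Shared only by the two tests that touch the same mailbox;
    // tests in other classes are unaffected and still parallelize.
    private static readonly object _emailAccountLock = new object();

    [TestMethod]
    public void TestA()
    {
        lock (_emailAccountLock)
        {
            // send and verify email for scenario A
        }
    }

    [TestMethod]
    public void TestC()
    {
        lock (_emailAccountLock)
        {
            // send and verify email for scenario C
        }
    }
}
```

Note the lock does not control ordering, only mutual exclusion: whichever test starts first runs to completion before the other begins.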
Ok, then see the discussion/workaround above on how to decorate tests so that some run in parallel and some don't. P.S. I'm just a user with very similar requirements/issues, not a dev on the MSTest team.
@abatishchev The workaround works fine, but all the tests decorated to not run in parallel will be executed one at a time, and that adds extra time to the execution. I want them to run in parallel, just with some tests running after others.
You can have different categories, like Parallel1, Parallel2, Parallel3, and run one set of tests in parallel, then another, then another, then everything else. Again, yet another workaround, but it works for us.
I have recently moved our framework from MSTest to MSTest v2 using .NET Core and am observing a few issues when running tests in parallel. If I execute a single test I am able to connect to the DB for my test data and get the data, but when I run two tests in parallel I am not getting any data. Could anyone let me know what the issue could be? I am using SqlDataAdapter. I had been using DataRow in MSTest, but MSTest v2 does not have any DataRow in the same form. Could this be the issue, or is it related to the SQL implementation?
Most likely it's an issue in your code. We're making numerous async HTTP calls from tests and it all works just fine. Post a question on Stack Overflow and provide a link here.
Thanks for the update. I was able to fix it. I am new to .NET Core, and when I run the tests in parallel it is not picking up both tests. Any idea how we can identify the number of test cases triggered? I have some predefined things that need to run in the initialize method. I ran two test cases, but only one test goes into the initialize method.
From the RFC it isn't clear whether a single test method with multiple DataRow attributes will be parallelized. My own testing right now seems to indicate that they are not parallelized.
Being able to parallelize over DataRow attributes would help me a great deal with making some of my integration tests more convenient to run.
I realize I can refactor so that the test method itself uses Parallel.ForEach (or similar) to explicitly parallelize over the test cases I have, but then I'd lose the separate-outcome-per-case display in Test Explorer. (I'd certainly consider doing this anyway if my integration test suite grows much bigger.)
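The refactoring described above might look roughly like this sketch (the cases, class, and failure condition are placeholders); the comment notes the trade-off mentioned:

```csharp
using System;
using System.Threading.Tasks;

static class IntegrationCases
{
    public static void RunAllCasesInParallel()
    {
        var cases = new[] { 1, 2, 3 }; // placeholder test cases

        // All cases run inside one test method, so Test Explorer
        // reports a single outcome: any case failing here fails
        // the whole method, losing per-case result display.
        Parallel.ForEach(cases, c =>
        {
            if (c < 0) throw new Exception($"case {c} failed");
        });
    }
}
```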
I have clarified this in the RFC - indeed, parallelizing over DataRow attributes is not supported. Thank you for bringing it up.
This has been shipping since v1.1.3.0 Beta, and went RTM in v1.3.0. Accordingly, closing this issue.
Cooooool
I recommend moving to v1.3.1. It has the fix for AppDomain creation should honor runsettings.
Description
Enable test runs to complete faster by allowing tests within a single assembly to execute in parallel.
Please see RFC here: 004-In-Assembly-Parallel-Execution