I looked up how other test frameworks do it. Nightwatch apparently runs test files in alphabetical order and runs the tests within each file in the order they were specified. I recall Selenium WebDriver was similar. I wonder if TestCafe is the same.
https://stackoverflow.com/questions/32703989/how-can-i-run-nightwatch-tests-in-a-specific-order
@charlieg-nuco, AFAIK Nightwatch and Selenium don't have a feature that is similar to the concurrency mode. In the regular mode, TestCafe executes tests preserving the order they were described in fixture files.
I have also run into cases where I would like something like this. My tests are usually grouped into suites (groups of files, each file with its own fixture). Each suite must run in order, but different suites can run in parallel with each other.
I think a good way to accomplish this would be to allow the user to specify atomic groups of files that cannot be run in parallel. For example, if I say run all tests in tests/** but pass an atomic group of ['tests/a.js', 'tests/b.js'], one browser window might run all of the tests in a.js and then b.js, while the other browser windows run the remaining tests in whatever way is most efficient.
This might be a little awkward to specify through the command-line interface, but I think this is a more advanced case, and it's fine to require the Node API if you need this functionality.
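For illustration only, here is how such an atomic-group option might look on the programmatic runner API. Nothing below exists in TestCafe today: atomicGroups is a hypothetical method name invented to express the proposal above as code, while the surrounding calls (createTestCafe, createRunner, src, browsers, concurrency, run) are the real programmatic interface.

```js
// Hypothetical sketch only -- '.atomicGroups()' is NOT a real TestCafe API;
// it merely expresses the proposal above as code.
const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost');

    try {
        const failed = await testcafe
            .createRunner()
            .src('tests/**/*.js')
            .browsers('chrome:headless')
            .concurrency(4)
            // Proposed: files inside one group always run in order, in a single
            // browser instance; the remaining files are distributed as usual.
            .atomicGroups([['tests/a.js', 'tests/b.js']])
            .run();

        console.log('Failed tests:', failed);
    }
    finally {
        await testcafe.close();
    }
})();
```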
We have the same issue. We have an online file system of sorts and are testing the trash functionality. There are tests for deleting and restoring single files etc. and there is also a test to empty the entire trash. The problem is that without being able to control the order in a fixture, the empty entire trash test will randomly interfere with the other tests causing them to fail.
@alexschwantes thank you for sharing your use case. Currently, you can execute tests that can't work in the concurrency mode in a separate TestCafe session.
Thanks @AndreyBelym. We run a continuous integration server that then reports all the results together via a junit report plugin. Can the different testcafe sessions join the results or do you mean to just run testcafe twice separately? And if so then how do you prevent the first session from running a specific test that you want to run in the second session without hardcoding files that you want to run?
@alexschwantes I think you can try to use a programming interface to run tests separately. Please see the example in this answer on StackOverflow.
In this case, you need to write some custom code to join the reporter results or implement your own reporter somehow. As for hardcoding the files, you can use the metadata mechanism, which is more suitable for this purpose.
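To make the suggested workaround concrete, here is a minimal sketch that splits tests into two separate runs by metadata using TestCafe's programmatic API. The metadata key serial, the file paths, the browser, and the report file names are placeholders for illustration; the runner methods themselves (createRunner, src, filter, browsers, concurrency, reporter, run) are part of the documented API.

```js
// Minimal sketch of the workaround: two separate runs from one script,
// split by test metadata. Assumes order-sensitive tests are tagged with
// test.meta('serial', 'true') -- the key and value are placeholders.
const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost');

    try {
        // Run 1: everything that is safe to execute concurrently.
        const failedConcurrent = await testcafe
            .createRunner()
            .src('tests/**/*.js')
            .filter((testName, fixtureName, fixturePath, testMeta) => testMeta.serial !== 'true')
            .browsers('chrome:headless')
            .concurrency(3)
            .reporter('xunit', 'reports/concurrent.xml')
            .run();

        // Run 2: order-sensitive tests, one browser instance, no concurrency.
        const failedSerial = await testcafe
            .createRunner()
            .src('tests/**/*.js')
            .filter((testName, fixtureName, fixturePath, testMeta) => testMeta.serial === 'true')
            .browsers('chrome:headless')
            .concurrency(1)
            .reporter('xunit', 'reports/serial.xml')
            .run();

        // The two xunit files still have to be merged or post-processed
        // to produce a single CI report.
        console.log('Failed tests:', failedConcurrent + failedSerial);
    }
    finally {
        await testcafe.close();
    }
})();
```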
Thanks @AlexKamaev. Yes, I can see that writing my own harness with the programming interface to filter tests with the new metadata functionality, run them separately, and then join the results could be a way to do it. But it gets messy very quickly, even just updating the correct values in the JUnit results for the number of tests run, the time it took, etc. Additionally, it doesn't solve the issue others have described where it would be useful to simply run tests sequentially within a fixture. But it does go part of the way.
@alexschwantes I understand that it would be better for you to have the functionality out of the box than using workarounds. However, I cannot give you any estimates on when the feature will be implemented, so at this moment I recommend you use the workaround described above to achieve the desired behavior.
I'm having the same issue. I know Protractor shards (their term for concurrency) tests at the file level, i.e. tests in a single file run in one browser instance, and in the order they are written in the file (though this can also be randomized). Having some kind of flag that would allow concurrency at the file level would likely solve this issue in TestCafe... 🤔
I agree with @qualityshepherd: concurrency at the file level gives you better control, since tests are guaranteed to execute in a certain order inside a file while multiple files are executed in parallel. So if users want to execute tests serially, they just need to put them in the same file.
Is there any news on when this feature will approximately be available?
Any personal estimate may be misleading, so we cannot give one at the moment. Once we get any results, we will post them in this thread.
Another use case for this proposal is when you have tests that include some long waiting, and you want these particular tests to run in parallel with the rest of your test suite.
On its own, being able to parallelize tests at the fixture or file level would greatly increase the stability of my test runs.
As @qualityshepherd noted, running several tests at the same time on the same feature can cause race conditions with test logic. Doing so also interacts very poorly with features that are resource-intensive or are otherwise not designed for significant parallel usage.
IMO, there are really only two problems with TestCafe that keep it from easily being the best e2e solution: this one, and beforeAll, which of course is THE BIGGIE.
Hi guys,
We will keep this feature in mind when discussing future development. Thank you for your request.
@qualityshepherd what do you mean by beforeAll? We already have before and after hooks for testRun.
@Aleksey28 we've been over this MANY times :D https://github.com/DevExpress/testcafe/issues/3517
I see that we already delivered the feature you requested in a different thread.
Those who also need to handle the beforeAll problem with context for a specific fixture can refer to this package: testcafe-once-hook.
Yeah... sadly, the once-hook doesn't work for me... but I appreciate the effort.
Could you please clarify whether testcafe-once-hook does not work in your usage scenario or does not meet your requirements?
I have never been able to get testcafe-once-hook to work with page objects. No matter which way I try it, it fails or throws:
Cannot implicitly resolve the test run in the context of which the test controller action should be executed. Use test function's 't' argument instead.
Do you have an open issue for this problem? If not, please create one and share a simple sample where we could reproduce the problematic behavior.
Hello everyone,
We have added a new fixture method to disable the global concurrency setting for a specific fixture in the latest release.
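For reference, a short sketch of how the new setting can be used. The modifier name below, disableConcurrency, is the one given in the TestCafe documentation for this feature; the page URL, selectors, and test bodies are placeholders, so verify the exact syntax against the release notes for your version.

```js
// Sketch, assuming the fixture modifier is 'disableConcurrency' as documented
// for this release (verify against the release notes). Tests in this fixture
// run one after another in a single browser instance, while other fixtures
// still honor the global --concurrency setting.
import { Selector } from 'testcafe';

fixture('User lifecycle')
    .page('https://example.com')      // placeholder URL
    .disableConcurrency;

test('Create a user', async t => {
    await t.click(Selector('#create-user'));   // placeholder selector
});

test('Remove a user', async t => {
    await t.click(Selector('#remove-user'));   // placeholder selector
});
```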
Are you requesting a feature or reporting a bug?
Requesting a feature
What is the current behavior?
You can execute tests concurrently without restrictions
What is the expected behavior?
Although you guys encourage people to make test cases atomic, there are instances where having multiple test cases that execute in a specific order is needed. For instance: first "Creating a user", then "Removing a user", etc. If you keep them in order in your fixture, it works well as long as you execute them in one browser, but if you execute them with concurrency, there's no guarantee that the tests will be executed in that specific order.
Can you give us an option to prevent TestCafe from running some tests in parallel? I was thinking maybe a fixture option like fixture('Fixture name').page(page).parallel(false);
Or something along those lines. The idea is that that specific fixture should be run serially. If that's not feasible, maybe we can specify the order and prevent a test case X from entering the pool until some other test case Y is done; that way we can force test case execution order. I hope that makes sense.
Thanks a lot!