@kentcdodds Thank you for the issue!
Quick fix aka band-aid
To ensure that each test file has a properly registered afterEach() hook, you should call test.afterEach() directly from the test file instead of calling it in the shared helper playwright-utils.ts.
For example:
```ts
// playwright-utils.ts
// ...
// insertedUsers is the module-level Set of inserted user ids (populated by the helpers elided above)
export const afterEachHook = async () => {
  await prisma.user.deleteMany({
    where: { id: { in: Array.from(insertedUsers) } },
  })
  insertedUsers.clear()
}
```

```ts
// notes.test.ts
import { loginPage, afterEachHook } from '#tests/playwright-utils.ts'

test.afterEach(afterEachHook)
// ...
```
Explanation
The first test file imports playwright-utils and calls test.afterEach() with the hook body, so Playwright registers this hook for that file. The second test file also imports playwright-utils. However, this time Node does not actually execute the helper file, but immediately returns the exports from the require cache. Therefore, test.afterEach() does not run a second time, and Playwright does not register the hook for the second test file.
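To make the caching behavior concrete, here is a minimal sketch (the file name hook-registrar.ts is hypothetical, not from the repo) of how module-level code runs only once per Node process:

```ts
// hook-registrar.ts (hypothetical): module-level code runs once per Node process
console.log('registering afterEach hook')
export const registered = true

// a.test.ts and b.test.ts in the same worker both contain:
//   import { registered } from './hook-registrar'
// The log above fires only when the first test file loads; the second import
// returns the cached exports, so nothing gets registered again.
```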
Solution we recommend
If you don't want to repeat the test.afterEach() call in every file, you can structure your helpers as fixtures. This way the cleanup that removes a user lives right next to the code that inserts the user, and there is no global collection of insertedUsers.
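A minimal sketch of that recommendation, assuming Epic Stack's prisma client; the import paths, the insertNewUser name, and the user fields are illustrative rather than the exact code from the linked PR:

```ts
// tests/playwright-utils.ts: fixture-based sketch (paths and fields are assumptions)
import { test as base } from '@playwright/test'
import { prisma } from '#app/utils/db.server.ts'

type UserFixtures = {
  insertNewUser: () => Promise<{ id: string }>
}

export const test = base.extend<UserFixtures>({
  insertNewUser: async ({}, use) => {
    const insertedIds: string[] = []
    // the test calls this function to create a user; every created id is remembered here
    await use(async () => {
      const user = await prisma.user.create({
        data: {
          username: `user-${Date.now()}`,
          email: `user-${Date.now()}@example.com`,
          name: 'Test User',
        },
      })
      insertedIds.push(user.id)
      return user
    })
    // cleanup runs after the test finishes, right next to the insertion logic above
    await prisma.user.deleteMany({ where: { id: { in: insertedIds } } })
  },
})

export { expect } from '@playwright/test'
```

Because the created ids and their cleanup live inside one fixture, each test file only needs to import the extended test object.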
Thanks for explaining that!
This behavior is surprising to me because I thought that every test runs in isolation from the others so I wouldn't expect the require cache to be shared between them.
I personally don't really like the fixtures API and really prefer using regular functions for this. So I may try the workaround for now. But I do think the current behavior is incorrect and I'd prefer that it be changed. Will that happen?
The way Playwright isolates tests is that each test gets its own Browser Context, and that way they are isolated. But in terms of Node, all the tests run in the same Node environment (modulo worker instance). So subsequent tests will actually run in the same Node process, although they will use different browser contexts.
When a test fails, we consider the entire worker (including Node) unrecoverable and start a new one. That way we don't infect the subsequent tests with past problems. But if the test passes, we assume it did not damage the test runner runtime. This isolation goes beyond that of other test runners, since we can recover from a test that tries to break the entire process: the worker will be shut down and a new one will take its place.
This behavior is surprising to me because I thought that every test runs in isolation from the others so I wouldn't expect the require cache to be shared between them.
Loading each test file in isolation in Node is expensive. It would take tens of seconds just to list the tests in all the test files if we did so, because we would need to load and run all the node modules over and over for each file. Running each test in full isolation would mean loading all the modules for each test and also starting a new browser instance for each test, which is prohibitively expensive. On top of that, we would be deviating from the conventional Node model, where require works the way Dmitry described, into a new runtime with its own rules of execution.
But I do think the current behavior is incorrect and I'd prefer that it be changed. Will that happen?
That would be tricky, as the behavior you don't like comes from Node, and the only way for us to work around it would be to run a new Node process for each test, which is unlikely to happen.
I personally don't really like the fixtures API and really prefer using regular functions for this.
I wonder if we can convince you to give those a try. In our experience, once someone has tried them, they could never go back...
That makes sense and helps correct my mental model. Thank you!
I've tried the fixture model before with a login fixture and the API just felt awkward. It also is taught in the docs using page objects which I find to be problematic at scale (they often grow to do way too much).
I went ahead and rewrote it using fixtures https://github.com/epicweb-dev/epic-stack/pull/441 to tackle the cleanup problem. Was this similar to how you tried it? Having a fixture as a function is, I think, a great way of creating/disposing resources that we haven't explained in the docs.
Judging from the PR, it looks like you're recommending fixtures for this use case over using testing hooks at all, is that right? Especially considering switching from using a hook to the getOnboardingData fixture.
Judging from the PR, it looks like you're recommending fixtures for this use case over using testing hooks at all, is that right?
Fixtures are superior to hooks in all cases where there is cleanup. If you need to prepare something for an individual test or for an individual worker, a fixture is the way to go. Pairing before/after hooks with global variables becomes an anti-pattern for resource management.
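For illustration, here is a hypothetical test that consumes a fixture shaped like the sketch above, with no hooks and no shared module state (the route and assertion are made up for the example):

```ts
// notes.test.ts: hypothetical usage of the fixture-based test export
import { expect } from '@playwright/test'
import { test } from '#tests/playwright-utils.ts'

test('users can create notes', async ({ page, insertNewUser }) => {
  const user = await insertNewUser() // the fixture deletes this user after the test
  await page.goto(`/users/${user.id}/notes`)
  await expect(page.getByRole('main')).toBeVisible()
})
```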
I think Max took it a bit too far - I don't think loginPage should accept insertUser, let me take a closer look at that code.
Thanks! I think I'm landing on something that I'm pretty happy with myself, inspired by Max's PR. I'll push up my changes soon.
I'm happy with the direction here. Thank you so much for helping clarify things for me :) I hope to return the favor by teaching people how to do this!
Ok, yeah, I'm happy where this landed: https://github.com/epicweb-dev/epic-stack/commit/85465dbf83cfc70486043666c08270636b6dafb7
Thanks again everyone!
@kentcdodds: I'm late with my version: https://github.com/pavelfeldman/epic-stack/commit/539deb5a68a175144bb92d12a85ec25178b3d3bc
System info
Source code
Link to the GitHub repository with the repro
https://github.com/epicweb-dev/epic-stack
Steps
Open up http://localhost:3000/users
Observe the 6 users that were seeded.
Then run the tests one-by-one: npm run test:e2e:dev.
Check http://localhost:3000/users and observe that there are still only 6 users.
Then run all the tests: npm run test:e2e:run or npm run test:e2e:dev (either does the same thing).
Check http://localhost:3000/users and observe that there are many more new users.
Expected
I expect running all the tests at once to have the same cleanup functionality as running one test at a time.
Actual
Cleanup appears to not happen when running all the tests at once.
I've recorded a video demonstrating this behavior:
https://www.loom.com/share/07253aa0e7114996ab7a2b226722ce41
Here's the issue on the Epic Stack repo: https://github.com/epicweb-dev/epic-stack/issues/415
Relevant code:
Help is appreciated!