Open oliviertassinari opened 5 months ago
Vitest supports running tests on changed files only.
Personally, I'm against this: CI only passes when it fully passes. I haven't seen an implementation yet that didn't eventually evolve into ignoring more tests than intended.
@Janpot Tricky. I guess we could do something like https://github.com/mui/material-ui/blob/94cc3d0a66456713e6e25edf9a1ba35ad8df88bc/.circleci/config.yml#L191, but it won't really solve the problem.
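For illustration, here is a minimal sketch of that kind of CI-side filtering, assuming a pnpm workspace where packages live under `packages/` and the default branch is `master` (the script name and the diff base are hypothetical, not what the linked config actually does):

```ts
// scripts/test-changed.ts (hypothetical name)
import { execSync } from 'node:child_process';

// Files changed relative to the default branch.
const changedFiles = execSync('git diff --name-only origin/master...HEAD', {
  encoding: 'utf8',
})
  .trim()
  .split('\n')
  .filter(Boolean);

// Map changed files to workspace packages, assuming a packages/<name>/ layout.
const changedPackages = new Set(
  changedFiles
    .map((file) => /^packages\/([^/]+)\//.exec(file)?.[1])
    .filter((name): name is string => Boolean(name)),
);

if (changedPackages.size === 0) {
  // Fall back to the full suite when changes can't be mapped to a package.
  execSync('pnpm test', { stdio: 'inherit' });
} else {
  for (const name of changedPackages) {
    // Run only the test script of each changed package.
    execSync(`pnpm --filter "./packages/${name}" test`, { stdio: 'inherit' });
  }
}
```

Note that this replicates the dependency graph by hand (file path → package), which is exactly the kind of approximation that drifts out of sync over time.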
Ideally, we would be able to control the dependencies that invalidate tests, and be pretty broad in what we consider invalidating: https://vitest.dev/config/#forcereruntriggers.
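As a sketch of how wide that net could be cast, assuming a standard vitest.config.ts at the repository root (the extra globs are illustrative, not a recommendation):

```ts
// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // Any change matching these globs forces a rerun of the whole suite,
    // regardless of what the module graph says.
    forceRerunTriggers: [
      '**/package.json/**',
      '**/vitest.config.*/**',
      '**/vite.config.*/**',
      // Illustrative additions: treat lockfile and shared tooling changes
      // as invalidating everything.
      '**/pnpm-lock.yaml/**',
      '**/scripts/**',
    ],
  },
});
```

Combined with `vitest --changed` (or `vitest --changed HEAD~1` against a specific ref), this would run only affected tests while still forcing full reruns on broad configuration changes.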
> I haven't seen an implementation yet that didn't eventually evolve into ignoring more tests than intended.
Vitest says they use the module dependency graph, so maybe it's OK?
> Vitest says they use the module dependency graph, so maybe it's OK?
Yep, I meant that all the solutions I've seen before worked by manually replicating the module graph in CI, and it's not a matter of if, but when, that replica goes out of sync with the real module graph. The biggest problem is that it's almost never the person who caused the drift who notices it; instead, an unsuspecting victim a few weeks later finds tests failing for reasons unrelated to the changes they are making. And since the failing test can't be connected to any known change, it's entirely up to that person to debug it until the root cause is found.
I'm less hesitant to try a solution that understands the module graph as created by pnpm workspaces. But even that will fail when maintainers freely ignore the package boundary for convenience (not blaming them, just showing how easy it is to break the assumptions). Similarly, I suspect some of the aliasing we do also creates hidden dependencies.
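For example, here is a hedged sketch of how a resolve alias can create such a hidden dependency (the package name and path are made up for illustration):

```ts
// vitest.config.ts
import { fileURLToPath } from 'node:url';
import { defineConfig } from 'vitest/config';

export default defineConfig({
  resolve: {
    alias: {
      // Imports of '@mui/internal-utils' (hypothetical) resolve straight to
      // another package's sources, bypassing the pnpm workspace graph.
      // A tool that only looks at package.json dependencies won't see that
      // tests importing this alias actually depend on those files.
      '@mui/internal-utils': fileURLToPath(
        new URL('./packages-internal/utils/src', import.meta.url),
      ),
    },
  },
});
```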
> The biggest problem is that it's almost never the person who caused the drift who notices it; instead, an unsuspecting victim a few weeks later finds tests failing for reasons unrelated to the changes they are making. And since the failing test can't be connected to any known change, it's entirely up to that person to debug it until the root cause is found.
The beginning of a horror movie 😄.
I could see a way out, but it only works with step 1:
Summary
We run all the tests in the monorepos with Mocha, even when the sources didn't change. It would be great to skip the tests whose sources were untouched.
Examples
No response
Motivation
Solving #170 should most likely be done first, since skipping tests can be unreliable.