Open jasonkuhrt opened 6 months ago
I might be missing the intent of this issue as a feature request for Vitest typecheck mode, but as an alternative approach, I think it's perfectly reasonable to have both expect and expectTypeOf for a concrete "term" in a normal test file and let TypeScript spot type errors as usual, without Vitest typecheck mode.
I was looking at the tRPC codebase the other day and found an example of this:
test('decorate independently', async () => {
  const result = await ctx.proxy.getContext.mutate();

  expect(result).toEqual({
    user: mockUser,
    foo: 'bar',
  });

  expectTypeOf(result).toEqualTypeOf<{
    user: User | null;
    foo: 'bar';
  }>();
});
I'm not sure this approach would fit your use case, but I thought I'd share since I found this pattern has nice ergonomics and I just started to employ this on my own project.
@hi-ogawa thanks, I have done something similar in the past, but what this issue is asking for is support in the test runner output too, not just the IDE or tsc checks.
I believe I just ran into the same issue / desire... I set test.typecheck.include to ['**/*.{test,spec}?.?(c|m)[jt]s?(x)'] (removed the -d), intending to pick up all my test files this way. I was very surprised to discover that this disabled the normal tests. I thought I could have test files containing both normal and type-level tests, and look at test output that combined these.
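For reference, the change described above would look roughly like this in a Vitest config file. This is a sketch: the glob is copied verbatim from the comment, and the surrounding option names assume Vitest's test.typecheck configuration shape.

```ts
// vitest.config.ts — illustrative sketch of the configuration described above.
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    typecheck: {
      enabled: true,
      // The glob quoted in the comment, i.e. the default pattern with the
      // `-d` suffix removed so that *all* test files match. As noted above,
      // this currently causes matching files to run only as type tests,
      // not as normal runtime tests.
      include: ['**/*.{test,spec}?.?(c|m)[jt]s?(x)'],
    },
  },
})
```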
Is there a reason to prefer separated tests?
> Is there a reason to prefer separated tests?
This is a technical limitation for now.
Clear and concise description of the problem
Currently, type tests must live in a different module than term tests ("concrete" seems to be the term used by the docs).
This creates a separation by function rather than by concern.
That is, there are times when a test case concerns assertions for both the terms and types.
Having to keep these in two separate modules is not ideal, since it creates distance between these concerns.
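To illustrate the split being described: today, runtime and type assertions about the same value typically end up in two files. The file names, the add function, and its module are hypothetical, but the pattern follows Vitest's default globs (*.test.ts for runtime tests, *.test-d.ts for type tests):

```ts
// add.test.ts — runtime ("term") assertions, run by the normal test runner.
import { expect, test } from 'vitest'
import { add } from './add'

test('adds numbers', () => {
  expect(add(1, 2)).toBe(3)
})
```

```ts
// add.test-d.ts — type-level assertions, run only in typecheck mode.
import { expectTypeOf, test } from 'vitest'
import { add } from './add'

test('add returns number', () => {
  expectTypeOf(add).returns.toEqualTypeOf<number>()
})
```

Both files concern the same function, yet their assertions (and their failures in the runner output) live apart.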
Suggested solution
I'm really not sure, off the top of my head, what design and/or implementation considerations would be at play. I think it is desirable to have the test runner terminal/UI/etc. output "merge" the failures from a test case into a unified view, but I don't think it's simply a matter of taking the current outputs and concatenating them... Some thinking will probably be required about how best to present them: grouping vs. interleaving, etc.
Alternative
No response
Additional context
No response