slikts opened this issue 4 years ago
Sorry, I'll have to disagree that it "should" be replaced. That's a very strong sentiment, and Babel has a bunch of caveats, from not type-checking to being semantically different to not respecting `tsconfig.json`.
TSDX supports lots of users and I don't think those significant trade-offs make sense for a default setting. Strict defaults also mean solid best-practices out-of-the-box, which is the optimal experience.
You can customize your Jest config with a `jest.config.js` and change `transform` to use `babel-jest`, or turn off `ts-jest`'s diagnostics, if that's what you're looking for.
That's advanced usage and different from the defaults, so I think it makes sense that you'd have to configure that and that it's not available out-of-the-box.
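For reference, a minimal sketch of what such an opt-out could look like (this assumes the `globals`-based `ts-jest` configuration used around this time; option names have shifted between `ts-jest` versions):

```js
// jest.config.js -- a sketch, not TSDX's actual defaults
module.exports = {
  globals: {
    'ts-jest': {
      // Disable ts-jest's type-checking diagnostics so Jest only reports
      // test results; types are then checked elsewhere (editor, tsc, CI).
      diagnostics: false,
    },
  },
  // Alternatively, the `transform` entry for .ts/.tsx files could be pointed
  // at babel-jest instead of ts-jest (which requires a Babel config).
};
```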
I'll also disagree with your characterization of that issue as "long-standing" -- it was responded to immediately and you were told that what you were seeking is impossible. You didn't respond with further clarifications, didn't provide a repro there or here, and there have been no up-votes in 1.5 years. I'm also fairly certain that `tsc` behaves the same exact way: any files that are imported will be type-checked.
Maybe your problem is different from that, but without a repro or clarification, it's difficult to understand what else might be the case.
In any case, this sounds more like an upstream issue than an issue here. You can customize Jest with a `jest.config.js`, so there's already a way to work through your specific use case.
I don't think a reproduction is required since the issue is clear: making breaking changes to code that has a few or more dependents causes the Jest output to be flooded with type errors, and it's not possible to filter out the errors and focus on just part of the code.
The workaround is to disable the `ts-jest` diagnostics, which is a good idea in any case, since types are best checked in the editor and by running the type checker separately on the whole code base. There's little utility in type errors being duplicated in the tests, especially for errors like unused values, which commonly happen during development, and especially because the type errors obscure the useful parts of the test output.
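For example (a sketch with hypothetical script names), the separate check can be as simple as an npm script that runs the compiler without emitting output:

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit",
    "typecheck:watch": "tsc --noEmit --watch"
  }
}
```

Run once in CI and in watch mode locally, this reports the same errors without tying them to the test runner.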
Tooling and the defaults should guide users towards a sensible workflow, and it's unclear what the proposed sensible workflow would be in this case. A sensible workflow would be for the developer to be able to turn their focus to a specific part of the code, make changes to the API, then make the tests work (or vice versa), and only then update other parts and fix details like unused imports. It's what makes Jest a nice test runner, in that it supports working iteratively with short feedback cycles.
There are projects that use babel-jest and dwarf most others in scale, so that's not an issue. There are caveats, but they're offset by the benefits, including not checking types in the test runner, which isn't a caveat.
Regarding whether it's a long-standing issue: I reported it, found a workaround, and was told that the issue is inherent and so won't be fixed. There's no reason to pursue fixes to `ts-jest`, particularly since there are alternatives, and it should just be relegated to legacy and niche use cases. I understand the preference towards established tools, but it's the same story as with TSLint being superseded by ESLint, `ts-node` by `babel-node`, etc.
> I don't think a reproduction is required since the issue is clear

Both the person who responded in `ts-jest` and I said variations of "or did I misunderstand?"
> which is a good idea in any case, since types are best checked in the editor and [...]

Those are opinions. I've certainly missed type errors from my editor before. You seem to have been the only one to report this as an issue to `ts-jest` and here, so I'm not sure that those opinions are widely held. If people upvoted either heavily, then the tides might turn.
> and especially errors like unused values

Also an opinion, but I've seen the pattern of having a `tsconfig.test.json` a couple of times, so maybe that's what you're looking for; you can disable lots of checks that way.
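A rough sketch of that pattern (file name and relaxed options are only examples; the `ts-jest` option for pointing at a config file has been spelled `tsConfig` or `tsconfig` depending on the version):

```json
// tsconfig.test.json -- relax only the checks that are noisy while iterating
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "noUnusedLocals": false,
    "noUnusedParameters": false
  }
}
```

You'd then point `ts-jest` at this file from your `jest.config.js`.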
> Tooling and the defaults should guide users towards a sensible workflow, and it's unclear what the proposed sensible workflow would be in this case.
Refactoring breaking changes and your specific method of refactoring -- not everyone refactors the same way -- is not the most common use case. The most common use case is the one that should be optimized. More specific ones can be configured.
> There are projects that use babel-jest and dwarf most others in scale, so that's not an issue.
I didn't mention scale? Most JS ecosystem libraries also tend to be on the smaller side.
> There are caveats, but they're offset by the benefits, including not checking types in the test runner, which isn't a caveat.
I'm not sure what benefits there are? What is and isn't a caveat and which is a better trade-off is an opinion. Well except that Babel explicitly calls those things caveats, so there's actually an official message on that... I had used the same term as Babel intentionally.
> Regarding whether it's a long-standing issue
This is semantics, but I interpret "long-standing issue" as a bug that hasn't been fixed in a long time or a controversial decision. This isn't either and you seem to be the only one to report that as an issue.
> and it should just be relegated to legacy and niche use cases. I understand the preference towards established tools, but it's the same story as with TSLint being superseded by ESLint, `ts-node` by `babel-node`, etc.
Sorry but these are not the same story. TSLint is deprecated and the TSLint team is helping the ESLint team. And ESLint has a dedicated TS parser that is separate from its Babel parser. It also lists caveats & trade-offs of using the Babel parser.
`ts-node` and `ts-jest` are not deprecated, and Babel does not consider itself to be a replacement. I believe `ts-jest` is even mentioned in the official Jest docs.
If those were the same story, then that would transitively suggest that `tsc` is legacy and niche too, for the same reasons.
`ts-node`, `ts-jest`, TSLint and `typescript-eslint`, and `tsc` all make use of the TS Compiler. Babel does not, afaik. That is the key difference, not whether they are "established" or not.
The pertinent question is what the intended workflow would be. Breaking changes are part of a normal development workflow: either the tests are made to match the intended result and then the code is updated to pass the tests, or vice versa. In either case, both tests and types serve to find issues early. I outlined what happens currently with `ts-jest`: you make changes, and then the test runner output overflows with type errors that are normally already available in the editor for the files being edited, and sometimes in a `tsc` process for other files. I don't see the advantage of having type errors obscure which tests are failing or passing, so I don't see what the proposed workflow would be, other than slaving to fix transient errors.
The point about scale was in response to the number of users: there are more popular adjacent projects that take a different tack, and having more users is a reason to take workflow into careful consideration.
> Breaking changes are part of a normal development workflow

Breaking changes shouldn't be made frequently; they are breaking, after all.
> In either case, both tests and types serve to find issues early.

And `ts-jest` outputs both test and type errors.
> I don't see the advantage of having type errors obscure which tests are failing or passing

Ok, then it sounds like `ts-jest` is not the tool for you. I'll repeat again that that's your opinion and that your opinion doesn't seem to be widely held. `ts-jest` is not a small, "niche", or "legacy" project as you've opined; it is in fact quite popular. Just because you don't like it doesn't mean other people don't.
> I don't see what the proposed workflow would be, other than slaving to fix transient errors.

It sounds like what you can do is temporarily turn off diagnostics or temporarily re-configure a `tsconfig.test.json` when you're making breaking changes, and then turn them back on after. Again, one does not normally make breaking changes frequently; it is not the most common use case.
> there are more popular adjacent projects that take a different tack
Would you like to show examples? That would be a good way to support any argument.
> and having more users is a reason to take workflow into careful consideration.
Again, not everyone has the same workflow as you. Again, community-driven projects shouldn't make big changes solely based off one person's opinion & workflow.
Sorry for being ambiguous; by "breaking changes" I'm talking about any type errors, and not just public API incompatibility. Even when talking about APIs, it's normal to create new features or new major versions, or to prototype or experiment, or to change private APIs, and temporary type errors are normal in all of these cases.
I can just restate the question of why it would be useful to not see which tests are failing if there are type errors. There are straightforwardly better workflows: seeing type errors in the editor or in a separate `tsc` process, and using the test runner to just see test output. Jest specifically has nice tooling to support a workflow that focuses on a subset of test output, and these features are made useless by type errors in the current approach.
It's not about specific preferences; temporary type errors are part of the normal use case and shouldn't require workarounds like disabling diagnostics. If not, it should be clearly described how type errors in the test output would be advantageous over the alternative. The most notable caveat of `babel-jest` is not supporting `const enum`, but it's not relevant to diagnostics, so I'm not sure what the advantages would be.
I've also not called `ts-jest` unpopular, just that its widespread use has more to do with it predating Babel support for TS, and with inertia. CRA in particular is an example that uses `babel-jest`.
> Sorry for being ambiguous; by "breaking changes" I'm talking about any type errors, and not just public API incompatibility. Even when talking about APIs, it's normal to create new features or new major versions, or to prototype or experiment, or to change private APIs, and temporary type errors are normal in all of these cases.

Ok, but a lot of those errors would be relevant, wouldn't they? The tests might even be failing because of type errors.
> or in a separate `tsc` process, and using the test runner to just see test output

Personally, I've never used a separate `tsc` process (and TSDX doesn't seem to support it well per your #529). I'll fix everything in my editor before running tests, and if the test runner gets type errors it's because I missed something.
> and these features are made useless by type errors in the current approach.

The type errors are only on files that are imported, if I'm understanding correctly, so I wouldn't say that feature is made useless -- it is run on a subset. It sounds like you'd be okay with type-checking in tests if it were limited to only the code run by the tests as opposed to the entirety of the file -- is that correct? That's difficult to do considering it'd require outputting type errors based on code coverage, but it would be something to clarify with `ts-jest`.
> temporary type errors are part of the normal use case and shouldn't require workarounds like disabling diagnostics. If not, it should be clearly described how type errors in the test output would be advantageous over the alternative.

Well, running a separate `tsc` process would also give these as type errors. Your workflow involves ignoring them temporarily the same way. Not everyone has that workflow (I don't).
> The most notable caveat of `babel-jest` is not supporting `const enum`, but it's not relevant to diagnostics

Not sure what you mean by "not relevant". If `babel-jest` doesn't support it, then it will fail to parse the file -- it limits what users can use. The semantic differences are where my primary opposition lies.
> I've also not called `ts-jest` unpopular, just that its widespread use has more to do with it predating Babel support for TS and inertia

`ts-jest` is recommended by Jest officially "if you want type-checking". Your issue isn't with `ts-jest` itself, but with the fact that its default is to type-check. I don't have in-depth knowledge of `ts-jest`'s issues over time, but based on your issue, it didn't seem like the decision to type-check by default received backlash.
> CRA in particular is an example that uses `babel-jest`

That's definitely a relevant workflow to consider given TSDX borrows some details from CRA. I'm not sure that their reasoning is quite the same as yours though. CRA is mainly for JS users and also supports TS, while TSDX is "100% TypeScript focused". `babel-jest` supports both, but is it ideal for TS users? Idk.
TSDX is also for library development.
Are there any other examples that you think would be relevant to consider? Notably this is a decision Jared made and I have a feeling his opinion is far stronger than mine. My primary concern is on the semantic differences, though currently we'd also have to do something about #529 or #352 to better support non-editor type-checking if type-checking were removed from tests. In any case, changing the test config would be a fairly big breaking change.
> Ok, but a lot of those errors would be relevant, wouldn't they?

`ts-jest` diagnostics show every type error from every test, with no option to filter out the irrelevant errors. That includes errors that don't affect the runtime, like unused values.
> The tests might even be failing because of type errors

The tests aren't even running if there are type errors, and if they are running, you can't see whether they're failing or not because of the type errors. I showed a screenshot of what it can look like.
> Personally, I've never used a separate `tsc` process …
This is not a workable approach for larger code bases, since editors just check the files that are open.
> I'll fix everything in my editor before running tests …
That's again not a workable approach for larger code bases, it's unnecessary for errors like unused values, it's also unnecessarily manual, and it runs counter to an incremental workflow where changes are made step by step, like tests being made to work one by one, etc.
> … I wouldn't say that feature is made useless -- it is run on a subset. …
The output filtering in Jest just doesn't work on the type errors. There is no subset.
> Well, running a separate `tsc` process would also give these as type errors.
Running `tsc` separately will not obscure the test output or prevent the tests from running. It's also faster than running the build, and it can be done in watch and incremental modes, which matter in particular for larger code bases, where you don't want to wait minutes for the types to be checked.
> Your workflow involves ignoring them temporarily the same way. Not everyone has that workflow (I don't).

I can say from extensive experience, and from using TS since it was new, that there's nothing special about the workflow of fixing type errors, or any errors, incrementally. The point isn't to slave to the tool but to have it make development easier. For example, that's also why Parcel doesn't check types: during development it's normal to have errors, and they should be checked just when they need to be, in the CI server at the latest.
> Not sure what you mean by "not relevant".

I meant that `const enum` in `ts-jest` will work with or without diagnostics. Using `const enum` is also generally discouraged for library code, since it can pose trouble to downstream users using Babel for transpilation.
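A tiny illustration of why (a sketch; exact Babel behavior has changed across versions): `tsc` erases a `const enum` by inlining its values across files, which a file-at-a-time transpiler like Babel cannot do, since it never sees the other module's declaration -- which is why Babel's TypeScript docs list `const enum` among the caveats.

```ts
// direction.ts
export const enum Direction {
  Up,
  Down,
}

// move.ts
import { Direction } from './direction';

// tsc inlines the member here, emitting roughly `return d === 0 /* Up */`.
// A per-file transpiler only sees this file, so it has no value to inline.
export function isUp(d: Direction): boolean {
  return d === Direction.Up;
}
```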
> … TSDX is "100% TypeScript focused".

Nominally, but CRA has a significantly better TS DX.
I figured I'd add my two cents since we're in more of a discussion phase. One of the things I find most attractive about TSDX is that it's somewhat strict by default. It's designed for building libraries and shared code. When I'm working on those kinds of projects, I really do want the coding standards to be higher, so I'm going to want stricter and more difficult-to-circumvent rules as a result. I realize that might slow development slightly because developers will have to comment out a variable they aren't using yet to get tests to pass, but that's honestly fine with me, and I doubt I'm in the minority on that. Developers who want a less strict experience really have other options.
This is in no way a suggestion to lower code standards; all code should still be type checked, but it's not necessary to do it in the test runner, for the reasons mentioned before. Even in the very simplest case of fixing unused identifiers, it diverts limited developer attention to a chore -- attention that could instead be spent on improving code quality in a more meaningful way, especially since the unused identifiers would still break the build and not pass CI anyway. Cases other than unused identifiers can easily be either much more involved to fix or even impracticable, leaving the user either to develop without tests (bad) or to have to fix the setup (also bad).
To reiterate, the suggestion isn't to stop the types from being checked, and the question to answer is why it should be the test runner that checks the types, given the downsides.
An issue about `ts-jest` vs Babel; a quote from the repo owner:

> Even in its current state, I think using typescript via babel should be the way to go for testing. Type-checking should be separate from testing and tsc can be used for that directly. There's no benefit in having the test runner also do the type-checking.
Another user, about errors:

> In local development we're ok to ignore type errors in tests, cause either they can be caught at a later stage, or tests will fail anyway because something is undefined/missing.
Here's a CRA feature request about type checking in tests where `ts-jest`'s possible legacy status is cited by a dev to rule it out:

> We are eager to get some form of type checking in for test files, but we also don't want to adopt a library that potentially won't be maintained going forward. We are exploring other options to satisfy this that better fits in our current ecosystem of tools.
The issue I linked to initially was from before Babel support for TS, and it was just one that I was familiar with.
For what it's worth, unless one has a strong desire to type-check during tests, the main contributors of both `ts-jest` (quoted above) and `jest` are advising that people use Babel for TS tests.
> It seems like since we already have a solid compilation step, we shouldn't need to also typecheck our tests 🤷‍♂️
I also figured that it'd be nice to not have Jest be a chain dep that relies on updates from `ts-jest` before implementation.
So in the `ts-jest` issue linked above, which is also over a year old, the same comment also has Jest's core contributor saying:
> I personally agree with this, but I also very much understand those that think tests should be type checked before they are run. So I think there'll always be a place for `ts-jest`.
And the repo owner also said in that issue:
> I know this isn't a universally shared opinion (we've had disagreements even amongst the contributors here on this)
> It seems like since we already have a solid compilation step, we shouldn't need to also typecheck our tests 🤷‍♂️
The build step doesn't type-check your actual test files. The discussion here isn't on using `build` to substitute `ts-jest`'s type-checking, but on `tsc --noEmit --watch` to substitute it.
> This is not a workable approach for larger code bases

We've already moved past this, but for double posterity, I did say in the very same sentence that I use my editor and then use `ts-jest` to get anything missed. I'm also quite good at refactoring and have refactored hundreds to thousands of files, so I'd disagree that it's not workable, as I myself am a counter-example to that.
> This is in no way a suggestion to lower code standards; all code should still be type checked, but it's not necessary to do it in the test runner because of the reasons mentioned before

Yes, for other readers, this has evolved into a discussion of where to type-check. There is, however, a different notion of "strictness" where one means catching errors earlier (e.g. in tests) / doing this as part of the test step specifically (as was mentioned above), etc. This is an opinion and one workflow, but it is the current one TSDX follows, so any shift would be quite big.
> Here's a CRA feature request about type checking in tests where ts-jest's possible legacy status is cited by a dev to rule it out

This is also dated. `ts-jest` is currently maintained fairly well, with a few maintainers, and is up-to-date with Jest 25.
That issue also requests type-checking support in tests, and the CRA team is supportive of that; they just, at the time, were not supportive of `ts-jest` specifically. There isn't really another alternative, however; `jest-runner-tsc` isn't viable yet for a number of reasons (see its issues, other refs to it, etc).
It's notable that we'd need to support `tsc` for type-checking very well out-of-the-box for some time before we can consider changing, as we'd want users to be used to that workflow or at least know about it well as an alternative. Otherwise, we risk users missing type-checking in places, which would result in less strictness.

My other concern is that `@babel/preset-typescript` doesn't respect `tsconfig.json`, so the way it transpiles may result in semantic differences (as I mentioned at the beginning). I am not sure what the exact differences may be, however.
We'd also have to auto-merge `@babel/preset-typescript` with users' `babelrc`, which requires some work and might have edge cases, and relates to a few issues on proper Babel support in tests.
Since there seems to be an interesting divide in schools of thought regarding type-checking in test files... I wonder if we can maintain backwards compatibility by letting people opt in to Babel for tests, but keeping the existing `ts-jest` behavior as the default.
I don't know that the divide is interesting, since the point about workflow hasn't been substantively addressed, aside from mentioning that some don't see it as a problem.
The workflow issue is this: unit tests provide isolation, and this isolation allows focusing on just a part of the code. For example, if you make an incompatible change, the isolation allows you to fix the incompatibility test-by-test. Jest supports this workflow very well, since it has a number of ways to filter which tests are run. For example, you change a method to take different parameters, and then fix all the call sites separately instead of having to do it all at once.
The test output becoming obscured by type errors breaks this workflow. If you make an incompatible change, you have to fix it everywhere to make use of tests, and there's no workaround for it by design.
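For readers unfamiliar with that filtering, a small sketch (the test names and the module under test are made up):

```ts
// math.test.ts -- focus on one case while the rest of the suite is in flux
// (assumes Jest's usual globals via @types/jest)
import { add } from './math'; // hypothetical module under test

describe('add', () => {
  // `.only` limits the run to this test within the file; `jest -t 'adds'`
  // or watch-mode filters narrow the run further from the CLI.
  test.only('adds two numbers', () => {
    expect(add(1, 2)).toBe(3);
  });

  test('handles negatives', () => {
    expect(add(-1, -2)).toBe(-3);
  });
});
```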
Edit: I've not used TSDX since, because my use cases often involve designing APIs, and it's just not viable with the `ts-jest` workflow, since there are often incompatible changes.
Hmmm. This is a very interesting debate.
I find typechecking in Jest to be very, very useful. It is not always easy to identify a breaking type change with just a `tsdx build`. However, the current workflow has bitten me in the ass a few times, especially when writing to formik 2. Not being able to isolate a single test at a time because of type errors was super annoying and resulted in larger, although more stable, commits. I can see how this would be exacerbated on large projects.
Regardless of the above, what @agilgur5 has mentioned about tsconfig not being respected and compilation possibly being different is a red flag.
Before giving my 2 cents, I think it would be helpful for me to see a repo with a babel-jest setup as discussed in the OP issue (without tsdx). It would be helpful to compare the workflow.
@jaredpalmer I also mentioned several pre-reqs that TSDX would need to support for this kind of workflow, so it's relatively far out to even consider.
@jaredpalmer @kylemh @slikts also, in my previous comments I've listed various workarounds that one can do within `ts-jest` already, such as turning off diagnostics entirely or turning off certain errors in a `tsconfig.test.json`. Turning off type-checking should make this a very similar workflow to `babel-jest`. So you can already opt out of it with a few lines of code, and I've even given directions for it before: https://github.com/formium/tsdx/issues/681#issuecomment-612929386.
Again, this discussion is now around what the default workflow should be. Other than configuring `ts-jest`, almost everything in TSDX can be opted out of, including `tsdx test`. I am supportive of @slikts's concerns; however, as I listed in my previous comment, for that to be the default, a number of things have to be made easy (including opting in to the current behavior) and well supported. And the semantic differences need to be investigated and solved for as well. In the meantime, opting out is quite easy per the above.
> aside from mentioning that some don't see it as a problem.

That's an understatement, given that there are several groups of users, in and out of TSDX, in favor of the current behavior...
Well summarized. I think we keep the default behavior then and document the workarounds explicitly.
Currently `ts-jest` is used for tests, and it has a long-standing issue that it doesn't support a sensible workflow for making changes: https://github.com/kulshekhar/ts-jest/issues/798

If I'm making breaking changes to the code and want to focus on a specific test suite or case, there's no way to filter out the "diagnostics" from the other test cases, so the output can become flooded with messages.

The solution is to let Babel handle TS, since it doesn't flood the Jest output with type errors.
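For illustration, a minimal sketch of what such a setup could look like outside of TSDX (the globs and targets are just examples; `babel-jest` also needs a Babel config with the TypeScript preset):

```js
// jest.config.js -- sketch of a Babel-based Jest setup for TS
module.exports = {
  transform: {
    // Hand both JS and TS files to babel-jest instead of ts-jest.
    '^.+\\.[jt]sx?$': 'babel-jest',
  },
};

// babel.config.js
module.exports = {
  presets: [
    // Compile for the current Node version when running tests.
    ['@babel/preset-env', { targets: { node: 'current' } }],
    // Strip TypeScript types without type-checking them.
    '@babel/preset-typescript',
  ],
};
```

Type-checking then happens in the editor, a separate `tsc --noEmit` run, or CI, rather than inside the test runner.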