aaronjensen opened this issue 5 years ago
FWIW, the slowdown appears to come from the recursive application of { readonly [K in keyof T]: DeepReadonly<T[K]> }. If that's extracted to a separate type like this:
export type DeepReadonlyObject<T> = {
readonly [P in keyof T]: DeepReadonly<T[P]>
}
And used like this:
// ...
: T extends {}
? DeepReadonlyObject<T>
// ...
Then watch is just as fast as a regular tsc compile.
Unfortunately, the editor tools don't display the type as nicely, since they show DeepReadonlyObject<{...}> instead of { readonly ... }.
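For reference, a minimal sketch of what the full type might look like with the helper extracted - the conditional branches aren't quoted above, so the array/function/primitive handling here is an assumption based on the usual ts-essentials-style shape:

export type DeepReadonlyObject<T> = {
  readonly [P in keyof T]: DeepReadonly<T[P]>
}

// Assumed shape: primitives pass through, arrays map to ReadonlyArray,
// and object types terminate at the *named* helper above - that named
// reference is what keeps the declaration printback short.
export type DeepReadonly<T> = T extends (infer R)[]
  ? ReadonlyArray<DeepReadonly<R>>
  : T extends Function
  ? T
  : T extends object
  ? DeepReadonlyObject<T>
  : T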
@aaronjensen Thanks for the feedback! I'm surprised to hear that tsc and --watch are behaving differently - that's really interesting. Do you have any .tsbuildinfo files that might be affecting incremental build? Are you using --incremental or --build?
A thousand file repro is fine, if we have access to the thousand files. :wink: They're not on GH, are they? Alternatively, we've just added a new --generateCpuProfile flag to tsc. It would be very cool if you could gather a trace. It's plain text, so you can double-check for private info, but it should just be paths, and they should already be sanitized. (If you decide to share a trace, please also note or include the version of tsc.js you were using when you collected it.)
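For example, an invocation might look like this (profile.cpuprofile is just an arbitrary output path):

tsc -p tsconfig.json --generateCpuProfile profile.cpuprofile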
They're not on GH, are they?
They are, but in a private repo. So unless you have access to private repos or there's a way to temporarily grant it (I'd need to receive my client's permission), then no, unfortunately.
Do you have any .tsbuildinfo files that might be affecting incremental build?
Not that I can see. Here's my tsconfig.json. I have noEmit: true and there's no built directory generated.
{
  "compilerOptions": {
    "allowSyntheticDefaultImports": true,
    "moduleResolution": "node",
    "noEmit": true,
    "pretty": true,
    "outDir": "./built",
    "allowJs": true,
    "jsx": "preserve",
    "target": "ESNext",
    "module": "esNext",
    "lib": ["es2016", "dom", "es2017.object", "dom.iterable"],
    "experimentalDecorators": true,
    "noUnusedParameters": true,
    "noUnusedLocals": true,
    "sourceMap": true,
    "strict": true,
    "baseUrl": ".",
    "paths": {
      "*": ["app/*", "types/*"]
    },
    "types": ["googlemaps", "webpack-env", "mixpanel", "gapi", "gapi.auth2"]
  },
  "include": ["./app/**/*"]
}
Here's the version the profiles were generated with:
Version 3.7.0-dev.20191021
And here are some profiles.
before.txt is the faster version ("ours" above), with --incremental, which took 70 seconds.
after.txt is the version from ts-essentials with --incremental, which took 7.5 minutes.
after-not-incremental.txt is the version from ts-essentials without --incremental, which took only 38 seconds.
--incremental uses ".d.ts" emit to decide what files need to be rebuilt. So please modify your tsconfig.json to remove noEmit: true and instead add emitDeclarationsOnly: true.
After that, run tsc --extendedDiagnostics to get details about the timings of each phase.
EDIT: This was with emitDeclarationsOnly: true, but the actual compiler option is emitDeclarationOnly: true, so I'm rerunning.
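Concretely, the suggested tsconfig.json change, with the corrected option name, would look something like this (note that emitDeclarationOnly also requires declaration to be enabled):

{
  "compilerOptions": {
    // "noEmit": true, <- removed
    "declaration": true,
    "emitDeclarationOnly": true
  }
}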
@sheetalkamat My original issue is about --watch - does the same apply?
Here's the timing info from tsc --extendedDiagnostics:
Files: 1594
Lines: 195461
Nodes: 750701
Identifiers: 262162
Symbols: 880782
Types: 454620
Memory used: 1906854K
Assignability cache size: 345227
Identity cache size: 15370
Subtype cache size: 116503
I/O Read time: 0.79s
Parse time: 1.18s
Program time: 3.39s
Bind time: 0.92s
Check time: 30.78s
transformTime time: 358.82s
commentTime time: 5.68s
printTime time: 384.62s
Emit time: 384.68s
Source Map time: 0.32s
I/O Write time: 0.64s
Total time: 419.78s
Note that I actually have some type errors in all of these builds that were introduced by 3.7 and I haven't taken the time to fix them yet.
Yes - --watch uses the same logic as --incremental (--incremental is just about serializing to disk the partial data that --watch keeps in memory). As seen above, the issue lies with the emit time. @amcasey The .d.ts emit seems to be slow. @weswigham may have seen this before.
Run again with the proper settings:
Files: 1594
Lines: 195461
Nodes: 750701
Identifiers: 262162
Symbols: 880801
Types: 454634
Memory used: 830202K
Assignability cache size: 345227
Identity cache size: 15370
Subtype cache size: 116503
I/O Read time: 0.75s
Parse time: 1.31s
Program time: 3.51s
Bind time: 0.86s
Check time: 32.17s
transformTime time: 766.51s
commentTime time: 4.81s
printTime time: 787.95s
Emit time: 788.00s
I/O Write time: 0.35s
Total time: 824.53s
And the faster one, run the same way:
Files: 1594
Lines: 195459
Nodes: 750671
Identifiers: 262146
Symbols: 908404
Types: 461249
Memory used: 881506K
Assignability cache size: 349613
Identity cache size: 15478
Subtype cache size: 117735
I/O Read time: 0.68s
Parse time: 1.18s
Program time: 3.23s
Bind time: 0.92s
Check time: 30.75s
transformTime time: 17.47s
commentTime time: 0.43s
printTime time: 20.00s
Emit time: 20.03s
I/O Write time: 0.33s
Total time: 54.93s
I suspect it's the same issue as a couple other threads - your type is all anonymous in the slow form, so everything gets printed structurally in a recursive fashion, resulting in a huge declaration file. The alternate form is faster because the printback terminates with a named type. To confirm:
Turn on declaration emit (declaration: true) and do a survey of the resulting declaration files - do any seem very large in the slow build? I imagine if you look at them you'd see why it takes so long.
Pass --generateCpuProfile profile.cpuprofile to the compiler and send/upload it for us to look at - the detailed timing information is super useful. ❤️
@weswigham okay, will try. I've been running an incremental build with declaration: true and emitDeclarationOnly: true and it's been running for over 15 minutes... will let it keep going for a bit more but may have to come back to it later.
The type emission for the fast version looks totally wrong. Here's an example property:
fast:
form: import("../../types/utility-types")._DeepReadonlyObject<{
submited: any;
validations: any;
}>;
slow:
readonly form: {
readonly submited: boolean;
readonly validations: readonly {}[] | null;
};
This happens on 3.8 nightly and 3.6.4
The build took over 20 minutes and it actually failed to emit every file it should have. I'll have to get a profile from it later.
@weswigham The zip file posted above includes cpu profiles. I think you're right that it's spending too much time emitting declarations, probably because it's not using names. I'm seeing 65% of time in emitDeclarationFileOrBundle and 57% of time spent in typeToTypeNode.
Specifically, a lot of time in typeToTypeNode indicates it may be a bug in our type generation where we fail to terminate an infinite type at a reasonable size (or the generated types are just really, really big)? Unless it's furthermore mostly in getAccessibleSymbolChain (or a worker thereof), in which case it's expensive uncached symbol visibility calculations, which I'd already like to fix, as I've seen such timings before. The example @aaronjensen posted above confounds both possibilities, though - I don't see why generating one of those forms over the other would be meaningfully slower, at least for types of those sizes (even with the "bug" in the first, which is likely a direct consequence of a reverse mapped type used to produce that type).
Yep, looking at the traces, I see pretty much all the time spent in trySymbolTable and getAliasForSymbolInContainer, which are workers used for calculating visible symbol chains. The whole process is uncached and quite naive right now (it's pretty much the same as it has been since TS 1.8, but we rely on it more and more). It has been on my backlog as something that needs to be rewritten with a better, more cacheable structure for perf.
There's also a fair chunk of time spent in GC and in path component-related operations that might also need some optimization - I'm guessing the large number of import types we manufacture require a large number of path calculations, which end up making a lot of garbage - some caching may be in order here.
Are you good on the cpu profiles I provided originally? Anything else you'd like me to try?
In general, it'd be helpful to know the rules for when TypeScript expands a type and when it's able to give it a name. Looking at the generated types, it expands my entire redux state everywhere it's used, which is pretty verbose. Could circular references be causing that?
@aaronjensen I think we have everything we need for now. @weswigham is there a mitigation we can suggest, other than rolling back to the original type?
Okay. One thing that helps a little is this hack:
type MyStateType = DeepReadonly<
{
foo: FooState,
bar: BarState
}
>
export interface MyState extends MyStateType {}
Is it worth opening a (separate) issue that the DeepReadonly generic type gets expanded when imported into another file? E.g. this in foo.ts:
import { DeepReadonly } from 'utility-types'
export type Foo = DeepReadonly<{ foo: string }>
and this in bar.ts:
import { Foo } from './foo'
export const useFoo = ({ foo }: { foo: Foo }) => {}
turns into this in bar.d.ts:
export declare const useFoo: ({ foo }: {
foo: {
readonly foo: string;
};
}) => void;
I'd like/expect to see:
export declare const useFoo: ({ foo }: {
foo: import("./foo").Foo
}) => void;
edit: It doesn't even need to be in a different file; if useFoo is in foo.ts, the same thing happens to it:
import { DeepReadonly } from 'utility-types';
export declare type Foo = DeepReadonly<{
foo: string;
}>;
export declare const useFoo: ({ foo }: {
foo: {
readonly foo: string;
};
}) => void;
Okay, it turns out that:
import { DeepReadonly } from 'utility-types'
export type Foo = DeepReadonly<{ foo: string }>
export const useFoo: ({ foo }: { foo: Foo }) => void = ({ foo }) => {}
Results in:
import { DeepReadonly } from 'utility-types';
export declare type Foo = DeepReadonly<{
foo: string;
}>;
export declare const useFoo: ({ foo }: {
foo: Foo;
}) => void;
That's definitely surprising. I figured it out while trying to track down a "XXX has or is using name ..." issue. Is there a way/plan to properly infer the original type instead of the expanded type in this situation? I'd imagine it'd actually help the emit situation in this original issue.
edit: Apparently function declarations infer properly... maybe that's enough of a reason to use top-level function declarations...
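For comparison, a minimal sketch of the function-declaration form of the same example, which per the observation above reportedly keeps the alias intact in the emitted .d.ts:

import { DeepReadonly } from 'utility-types'

export type Foo = DeepReadonly<{ foo: string }>

// As a top-level function declaration, the inferred parameter type is
// reportedly printed as `Foo` instead of { readonly foo: string; }.
export function useFoo({ foo }: { foo: Foo }): void {}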
Created https://github.com/microsoft/TypeScript/issues/34776 to track some of the optimizations we have in mind here.
Thanks, @DanielRosenwasser. Any thoughts on this:
Is there a way/plan to properly infer the original type instead of the expanded type in this situation? I'd imagine it'd actually help the emit situation in this original issue.
I'd be happy to open a new issue for it. Essentially, it'd be great if:
const foo = (bar: Bar) => {}
resulted in emitting the same types as:
function foo(bar: Bar) {}
It wouldn't rewrite as a function declaration, but here's what you probably have in mind: https://github.com/microsoft/TypeScript/issues/34778
At least what I mean is that, in that issue, you wouldn't have to think about the way foo is declared, because the only reason you should (in theory) run into that is alias expansion.
Yes, that's what I meant - not that it would rewrite, but that it would emit with the type alias intact, as it currently does when defined with function. I added an example to the issue you created. Thank you.
FWIW, I tracked down the bad declarations using an emit-only build, and then used a file size explorer to determine which files were slowing the build down. This got my build a 3x speedup (70 secs -> 20 secs):
tsc -p ./tsconfig.base.json --declaration --emitDeclarationOnly --extendedDiagnostics --declarationDir ~/Temp/declarations
I used Disk Inventory X and found big files and cut them down.
(Before/after screenshots of the output file sizes not reproduced here.)
I believe I ran into this today, and can reproduce on the playground (or locally). Playground link.
This eventually crashes the watch process locally. Outside watch mode, there doesn't appear to be any impact. It can be worked around by moving the Filtered<> call out of Filtered and into RemoveEmptyObject, as sketched below.
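The playground contents aren't reproduced here, so the following is a purely hypothetical reconstruction of that workaround's shape - only the type names come from the comment above; the bodies are invented for illustration:

// Hypothetical "after" state: the recursive Filtered<> application
// lives in RemoveEmptyObject rather than inside Filtered itself,
// which reportedly avoids the watch-mode blowup.
type RemoveEmptyObject<T> = keyof T extends never ? never : Filtered<T>

type Filtered<T> = T extends object
  ? { [K in keyof T]: RemoveEmptyObject<T[K]> }
  : T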
We are running into slow initial tsc --watch builds when running within Docker for Mac:
$ yarn run \
tsc --watch --diagnostics -p tsconfig.build.json
yarn run v1.22.0
$ /home/node/app/server/node_modules/.bin/tsc --watch --diagnostics -p tsconfig.build.json
[9:07:28 PM] Starting compilation in watch mode...
Current directory: /home/node/app/server CaseSensitiveFileNames: true
Synchronizing program
CreatingProgramWith::
roots: ["/home/node/app/server/migrations/1580263521268-CreateUserTable.ts","/home/node/app/server/src/app.controller.ts","/home/node/app/server/src/app.module.ts","/home/node/app/server/src/app.service.ts","/home/node/app/server/src/main.ts","/home/node/app/server/src/healthcheck/healthcheck.service.ts","/home/node/app/server/src/users/users.module.ts","/home/node/app/server/src/users/controllers/users.controller.ts","/home/node/app/server/src/users/entities/user.entity.ts","/home/node/app/server/src/users/services/users.service.ts"]
options: {"module":1,"declaration":true,"removeComments":true,"emitDecoratorMetadata":true,"experimentalDecorators":true,"target":4,"sourceMap":true,"outDir":"/home/node/app/server/dist","baseUrl":"/home/node/app/server","incremental":true,"tsBuildInfoFile":"/home/node/app/server/build_cache/.tsbuildinfo","watch":true,"diagnostics":true,"project":"/home/node/app/server/tsconfig.build.json","configFilePath":"/home/node/app/server/tsconfig.build.json"}
[9:08:33 PM] Found 0 errors. Watching for file changes.
Files: 959
Lines: 103856
Nodes: 386166
Identifiers: 135119
Symbols: 85445
Types: 75
Memory used: 152124K
I/O read: 2.68s
I/O write: 0.00s
Parse time: 63.43s
Bind time: 0.69s
Check time: 0.00s
Emit time: 0.00s
Total time: 64.11s
FileWatcher:: Triggered with /home/node/app/server/src/app.service.ts 1:: WatchInfo: /home/node/app/server/src/app.service.ts 250 Source file
Scheduling update
Elapsed:: 1ms FileWatcher:: Triggered with /home/node/app/server/src/app.service.ts 1:: WatchInfo: /home/node/app/server/src/app.service.ts 250 Source file
[9:09:16 PM] File change detected. Starting incremental compilation...
Synchronizing program
CreatingProgramWith::
roots: ["/home/node/app/server/migrations/1580263521268-CreateUserTable.ts","/home/node/app/server/src/app.controller.ts","/home/node/app/server/src/app.module.ts","/home/node/app/server/src/app.service.ts","/home/node/app/server/src/main.ts","/home/node/app/server/src/healthcheck/healthcheck.service.ts","/home/node/app/server/src/users/users.module.ts","/home/node/app/server/src/users/controllers/users.controller.ts","/home/node/app/server/src/users/entities/user.entity.ts","/home/node/app/server/src/users/services/users.service.ts"]
options: {"module":1,"declaration":true,"removeComments":true,"emitDecoratorMetadata":true,"experimentalDecorators":true,"target":4,"sourceMap":true,"outDir":"/home/node/app/server/dist","baseUrl":"/home/node/app/server","incremental":true,"tsBuildInfoFile":"/home/node/app/server/build_cache/.tsbuildinfo","watch":true,"diagnostics":true,"project":"/home/node/app/server/tsconfig.build.json","configFilePath":"/home/node/app/server/tsconfig.build.json"}
[9:09:17 PM] Found 0 errors. Watching for file changes.
Files: 959
Lines: 103856
Nodes: 386166
Identifiers: 135119
Symbols: 85446
Types: 88
Memory used: 152705K
I/O read: 0.01s
I/O write: 0.04s
Parse time: 0.03s
Bind time: 0.01s
Check time: 0.04s
Emit time: 0.21s
Total time: 0.29s
Without the --watch flag:
$ yarn run \
tsc --diagnostics -p tsconfig.build.json
yarn run v1.22.0
$ /home/node/app/server/node_modules/.bin/tsc --diagnostics -p tsconfig.build.json
Files: 959
Lines: 103856
Nodes: 386166
Identifiers: 135119
Symbols: 85445
Types: 75
Memory used: 141631K
I/O read: 2.52s
I/O write: 0.00s
Parse time: 7.19s
Bind time: 0.67s
Check time: 0.00s
Emit time: 0.00s
Total time: 7.86s
Done in 8.48s.
When running in the same container image on Arch Linux, the total time is 12.65s with --watch and 10.99s without.
Connects https://github.com/PublicMapping/district-builder-2/issues/43
@rbreslow Thanks for the report! Those numbers are pretty crazy - that seems like an awfully long time to parse that amount of code. Could you possibly re-run the numbers with --extendedDiagnostics, which should provide a slightly richer view? Also, I may just be missing it, but it would be great to know which version of tsc you're using. Thanks!
@amcasey I re-ran the numbers with --extendedDiagnostics.
$ yarn run tsc -v
yarn run v1.22.0
$ /home/node/app/server/node_modules/.bin/tsc -v
Version 3.8.3
Done in 0.30s.
With --watch: https://gist.github.com/rbreslow/98066b8828bdaea5b97989a91a3c70ed
Without --watch: https://gist.github.com/rbreslow/38f7940c16b6b9509f41720737071ca8
It seems that this is getting stuck on some parsing related to the node_modules directory.
My apologies for the delay in getting back to you. I appreciate your prompt response.
This seems like recursive directory watching of node_modules taking longer in the initial pass (because it's Linux):
DirectoryWatcher:: Added:: WatchInfo: /home/node/app/server/node_modules 1 undefined Failed Lookup Locations
Related #36243
(because it's Linux)
It's worth reiterating that this output was generated on macOS using Docker for Mac (which runs the Docker Engine within a Linux-based VM). When I use Arch Linux, where the container is run on the host's kernel, this delay isn't nearly as bad:
https://gist.github.com/rbreslow/cc9a0fa320e5fc249670c101345ab5b2
I also ran into a slow initial tsc --watch build:
$ ./node_modules/.bin/tsc -w -p tsconfig.json --noEmit --diagnostics
[...]
Files: 710
Lines: 112959
Nodes: 408962
Identifiers: 140431
Symbols: 294471
Types: 101551
Memory used: 328829K
I/O read: 0.43s
I/O write: 0.00s
Parse time: 5.06s
Bind time: 1.70s
Check time: 25.50s
Emit time: 1424.63s
Total time: 1456.89s
Without watch:
$ ./node_modules/.bin/tsc -p tsconfig.json --noEmit --diagnostics
Files: 710
Lines: 112959
Nodes: 408962
Identifiers: 140431
Symbols: 293857
Types: 101551
Memory used: 373074K
I/O read: 0.41s
I/O write: 0.00s
Parse time: 3.81s
Bind time: 1.53s
Check time: 23.49s
Emit time: 0.00s
Total time: 28.83s
tsconfig.json:
{
  "compilerOptions": {
    "strictNullChecks": true,
    "strictFunctionTypes": true,
    "noImplicitAny": true,
    "noImplicitReturns": true,
    "noImplicitThis": true,
    "outDir": "./built",
    "noEmitOnError": true,
    "sourceMap": true,
    "module": "esnext",
    "esModuleInterop": true,
    "moduleResolution": "node",
    "target": "es6",
    "jsx": "react",
    "lib": ["es6", "dom"]
  },
  "include": ["./src/**/*.ts", "./src/**/*.tsx"]
}
TypeScript version 3.8.3. Node.js version 12.16.1. MacBook Pro 2012 (El Capitan, 2.5 GHz Intel Core i5, 16 GB RAM, Samsung 840 Pro SSD).
@myshov that's an insane amount of emit time. Does the same thing happen if you enable declaration emit (-d or "declaration": true), but not watch mode?
@amcasey
With -d it is pretty fast:
$ ./node_modules/.bin/tsc -d -p tsconfig.json --noEmit --diagnostics
Files: 710
Lines: 112959
Nodes: 408962
Identifiers: 140431
Symbols: 293857
Types: 101551
Memory used: 371940K
I/O read: 0.45s
I/O write: 0.00s
Parse time: 3.87s
Bind time: 1.43s
Check time: 22.30s
Emit time: 0.00s
Total time: 27.59s
You need to run the above command without --noEmit to see the actual declaration emit happening. I believe the root cause is declaration emit taking long in this scenario.
Hey @sheetalkamat! Thank you for pointing that out. Here is the result without --noEmit. Note that I use the time command because the --diagnostics flag didn't show the actual time in my case :(
$ time ./node_modules/.bin/tsc -d -p tsconfig.json --diagnostics
Found 50 errors.
Files: 710
Lines: 112959
Nodes: 408962
Identifiers: 140431
Symbols: 294199
Types: 101551
Memory used: 1552030K
I/O read: 0.45s
I/O write: 0.00s
Parse time: 3.91s
Bind time: 1.21s
Check time: 21.79s
Emit time: 0.00s
Total time: 26.91s
real 22m38.904s
user 22m56.506s
sys 1m24.511s
(22 minutes of real time)
By the way, with the -d option I encountered 50 new compilation errors that weren't present without the flag before.
@sheetalkamat We don't warn about combining declaration with noEmit?
@myshov I'm surprised that wasn't included in the Emit Time (and "Total", unfortunately, is a sum of other rows and not the actual total). I'd be interested to know whether you see the same with --extendedDiagnostics, but I'll understand if you don't want to wait another 22 minutes. :wink:
@amcasey here we go!
$ time ./node_modules/.bin/tsc -d -p tsconfig.json --extendedDiagnostics
Found 50 errors.
Files: 710
Lines: 112959
Nodes: 408962
Identifiers: 140431
Symbols: 294199
Types: 101551
Memory used: 1539987K
Assignability cache size: 326538
Identity cache size: 2187
Subtype cache size: 2272
Strict subtype cache size: 2030
I/O Read time: 0.13s
Parse time: 1.90s
Program time: 3.71s
Bind time: 1.82s
Check time: 25.14s
transformTime time: 1234.69s
Total time: 30.67s
real 21m5.990s
user 21m56.442s
sys 1m15.291s
(21 minutes of real time)
@amcasey @sheetalkamat If you're interested, I can give you access to my project. It is not publicly available, so I can give you the access only privately, and only for investigation of this issue.
@myshov I really appreciate the offer - it's surprisingly hard to get access to repro code. In this particular case, I think we have a pretty good understanding of the problem - we're just stuck on the best way to fix it. If you want to try it out when we have something, that would be very helpful. Thanks!
It's worth surfacing that all contributors to this issue appear to be experiencing different points of slowdown within the initial tsc --watch process: some of us in transform, print, and emit times; some in program time; some in emit time.
You're welcome to exercise our development environment as well, via the instructions in the README: https://github.com/publicmapping/district-builder-2. The steps to reproduce our issue are outlined in https://github.com/PublicMapping/district-builder-2/issues/43.
@rbreslow A solid point. My understanding is that we're emitting unnecessarily large declaration files, which you'd expect to slow down program construction in the next iteration. Unless you're saying that you were experiencing slow program construction in a clean build, in which case I'd be surprised to learn your problem is specific to incremental compilation.
@amcasey no problem! Ping me here or on twitter (https://twitter.com/myshov - dm is open) if you need to check your fixes on my codebase.
With TypeScript 3.8.3 on my project, with noEmit: true, I have:
1) tsc
Files: 4142
Lines: 659126
Nodes: 2584089
Identifiers: 932980
Symbols: 689984
Types: 124417
Memory used: 939224K
Assignability cache size: 20023
Identity cache size: 8766
Subtype cache size: 5421
Strict subtype cache size: 2480
I/O Read time: 0.80s
Parse time: 2.23s
Program time: 5.90s
Bind time: 1.99s
Check time: 8.57s
Total time: 16.46s
2) tsc --watch
Files: 4142
Lines: 659126
Nodes: 2584089
Identifiers: 932980
Symbols: 1444826
Types: 433635
Memory used: 1345080K
Assignability cache size: 99382
Identity cache size: 21148
Subtype cache size: 30403
Strict subtype cache size: 21165
I/O Read time: 1.67s
Parse time: 2.63s
Program time: 11.86s
Bind time: 1.96s
Check time: 33.62s
transformTime time: 29.20s
commentTime time: 0.63s
printTime time: 32.07s
Emit time: 32.13s
Total time: 79.57s
5 times slower. Also, I just noticed: 3 times more types.
I think I'm running into this one as well. It seems like it affects anybody who uses mobx-state-tree. I have a pretty big model, but the .d.ts emitted with emitDeclarationOnly is huge: longer than the original file, over 1.2 MB, and full of repeated large types.
The output for smaller models looks like this:
/// <reference types="react-native-firebase" />
import { Instance } from "mobx-state-tree";
export declare const ToastModel: import("mobx-state-tree").IModelType<{
id: import("mobx-state-tree").ISimpleType<string>;
type: import("mobx-state-tree").ISimpleType<string>;
text: import("mobx-state-tree").ISimpleType<string>;
imageUrl: import("mobx-state-tree").ISimpleType<string>;
timeout: import("mobx-state-tree").ISimpleType<number>;
createdAt: import("mobx-state-tree").IOptionalIType<import("mobx-state-tree").IType<number | Date | import("react-native-firebase").RNFirebase.firestore.Timestamp, number | Date | import("react-native-firebase").RNFirebase.firestore.Timestamp, Date>, [undefined]>;
}, {}, import("mobx-state-tree")._NotCustomized, import("mobx-state-tree")._NotCustomized>;
export interface ToastModelType extends Instance<typeof ToastModel> {
}
Whereas the actual source file is shorter
import { types as t, Instance } from "mobx-state-tree"
import { FirestoreCompatibleDate } from "./FirestoreCompatibleDate"
export const ToastModel = t.model({
id: t.string,
type: t.string,
text: t.string,
imageUrl: t.string,
timeout: t.number,
createdAt: t.optional(FirestoreCompatibleDate, () => new Date())
})
export interface ToastModelType extends Instance<typeof ToastModel> {}
I don't know if this is expected, but it looks odd. Why emit import("mobx-state-tree") for every field? That import is repeated over 13,000 times in my large model. What you can't see in a short example is the repetition in the bigger model - looking at that 1 MB+ .d.ts output, types are duplicated everywhere. The source file is ~10k lines long, and the .d.ts output file is even longer.
It seems any usage of a MST model type causes TS to emit that entire type again inline. Some very large types are repeated 8-10x or more throughout the .d.ts output. I'm guessing this is a major reason for the slowness.
I am using the "mst ts best practice" of exporting all of my model instance types as interfaces. I double-checked that I'm only using those interfaces and not using the Instance or SnapshotIn helper types anywhere else.
Hopefully we can figure this out and get it resolved! Slow TS performance really hampers the iteration cycle. Let me know if any more info is needed.
Diagnostic output when emitting declarations only:
Files: 668
Lines: 201935
Nodes: 625303
Identifiers: 206856
Symbols: 373530
Types: 165163
Instantiations: 629570
Memory used: 384285K
Assignability cache size: 203043
Identity cache size: 2502
Subtype cache size: 12092
Strict subtype cache size: 6842
I/O Read time: 0.09s
Parse time: 1.15s
ResolveTypeReference time: 0.04s
ResolveModule time: 0.51s
Program time: 1.93s
Bind time: 0.87s
Check time: 8.63s
transformTime time: 105.92s
commentTime time: 0.25s
printTime time: 107.11s
Emit time: 107.12s
I/O Write time: 0.12s
Total time: 118.55s
Done in 128.93s.
Diagnostic output with noEmit
Files: 668
Lines: 201935
Nodes: 625303
Identifiers: 206856
Symbols: 370693
Types: 164659
Instantiations: 618573
Memory used: 432506K
Assignability cache size: 202732
Identity cache size: 2500
Subtype cache size: 12089
Strict subtype cache size: 6842
I/O Read time: 0.12s
Parse time: 1.09s
ResolveTypeReference time: 0.03s
ResolveModule time: 0.46s
Program time: 1.79s
Bind time: 0.78s
Check time: 7.66s
Total time: 10.23s
Done in 10.74s.
From a profile, over 60% of the work is in getAccessibleSymbolChainFromSymbolTable.
Has anybody figured out any tweaks to make TS a little faster, especially with mobx-state-tree? This is really painful. Autocomplete suggestions in WebStorm or VSCode take 7-12 seconds to appear. That turns TS from a wonderful coding companion into essentially a post-coding tool. It isn't usable in real time, which IMO is a hugely valuable use case.
@AndrewMorsillo I believe this bug is focused on tsc --watch performance. If you're seeing editor slowdowns, can you please file a separate issue? Feel free to tag me directly.
Hi there!
Unfortunately I have a similar issue. Compiling w/ tsc = 9 sec. Compiling w/ tsc --incremental = 70+ sec w/ TS v3.7.2. And even more = 175 sec w/ TS v4.0.2! Below: the extendedDiagnostics output.
tsc
Found 19 errors.
Files: 1165
Lines: 248833
Nodes: 825007
Identifiers: 288832
Symbols: 288525
Types: 61170
Memory used: 374691K
Assignability cache size: 25280
Identity cache size: 813
Subtype cache size: 5718
I/O Read time: 0.19s
Parse time: 1.27s
Program time: 2.76s
Bind time: 0.92s
Check time: 3.62s
transformTime time: 0.41s
commentTime time: 0.12s
I/O Write time: 0.09s
printTime time: 1.25s
Emit time: 1.25s
Total time: 8.55s
tsc --incremental, TS v3.7.2
Found 19 errors.
Files: 1165
Lines: 248833
Nodes: 825007
Identifiers: 288832
Symbols: 288683
Types: 61375
Memory used: 696528K
Assignability cache size: 25280
Identity cache size: 856
Subtype cache size: 5718
I/O Read time: 0.25s
Parse time: 1.72s
Program time: 3.77s
Bind time: 1.08s
Check time: 4.68s
printTime time: 73.46s
Emit time: 73.49s
transformTime time: 69.19s
commentTime time: 0.82s
I/O Write time: 0.10s
Total time: 83.02s
tsc --incremental, TS v4.0.2
Found 19 errors.
Files: 1169
Lines: 249209
Nodes: 828641
Identifiers: 288747
Symbols: 252556
Types: 61381
Instantiations: 148721
Memory used: 550598K
Assignability cache size: 24181
Identity cache size: 918
Subtype cache size: 3417
Strict subtype cache size: 1018
I/O Read time: 0.15s
Parse time: 1.76s
ResolveTypeReference time: 0.10s
ResolveModule time: 1.23s
Program time: 3.47s
Bind time: 0.86s
Check time: 4.23s
printTime time: 167.28s
Emit time: 167.30s
transformTime time: 164.51s
commentTime time: 0.51s
I/O Write time: 0.08s
Total time: 175.86s
I'm attaching an archive, for-git-issue.zip, containing:
profile.txt, the output of the --generateCpuProfile switch for run #2.
The .d.ts files that were generated using the command from this comment. The sizes look like this:
Maybe some recursive types that I created are the issue? However, when I first introduced them (a few months ago) I didn't notice the performance issue. Maybe it was there but I noticed it later, and/or maybe there are libs that I added meanwhile that contribute to this issue.
NOTE: the above investigation covers our internal "showcase" project (1169 files, 61k types), a minimal app showcasing "library" functions. One of our production apps (1435 files, 83k types) that uses the "library" has the same behavior, but in a more aggressive way: "normal" tsc = about 10 sec; "incremental" tsc = 490 sec.
We're also suffering from this problem: our declaration emit time is ~300 seconds (compared to just 17 seconds of type checking) and the output is ~15 MB.
I've managed to reduce the time down to 60 seconds using some of the tips in this issue, but (1) this is a bit of a dark art and (2) it is still a bit too long. I constantly restart tsc --watch when switching between branches, otherwise it seems to run forever, so I would really like tsc --watch initialisation to be as fast as tsc (no --watch).
Here is a screenshot of the CPU profile:
Here is an example of a file which generates a large declaration file, extracted from our app: https://github.com/OliverJAsh/ts-huge-and-slow-declaration-emit/blob/9b6f1e7e1afd445a3fabe18d1783cd002695fcda/src/types.ts#L127-L145.
If we can't completely fix the "huge declaration emit" problem, perhaps we could have a separate watch mode (for type checking purposes only) that doesn't use declaration emit. I understand this would probably lead to slower rebuilds, but that might be an acceptable trade off for some if it significantly reduces initialisation time.
@RyanCavanaugh @DanielRosenwasser @weswigham If you need a large example, you're welcome to look at the Unsplash repository.
@OliverJAsh see my PR - using the interface trick, the d.ts is now 133 lines:
https://github.com/OliverJAsh/ts-huge-and-slow-declaration-emit/pull/1/files
Masking the type with an interface mostly prevents inlining of the whole type declarations.
Comparing the --extendedDiagnostics output, it seems that it mostly affects "Subtype cache size" and "Strict subtype cache size".
before the trick:
Subtype cache size: 31
Strict subtype cache size: 51
after the trick:
Subtype cache size: 54
Strict subtype cache size: 31
As far as I understand, switching from type aliases to interfaces seems to help reduce the emitted declaration size (and therefore speed up declaration emit time). Related Twitter thread: https://twitter.com/drosenwasser/status/1319205169918144513.
However, this doesn't help in cases where the problematic types are generated dynamically using mapped types, as is the case when using tools such as io-ts or Unionize.
What can we do in these cases? It seems to me that the only solution would be for TS to treat type aliases in the same way it treats interfaces. That is, always preserve the display name and never inline the structure. Related: https://github.com/microsoft/TypeScript/issues/32287. /cc @DanielRosenwasser
However, this doesn't help in cases where the problematic types are generated dynamically using mapped types, as is the case when using tools such as io-ts or Unionize.
In some cases, you can mask it as well, as I'm doing for mobx-state-tree models: https://gist.github.com/Bnaya/5c208956a3e5f3699595f9b9ca0ad3eb#file-good-ts
It's super verbose, but it works.
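The gist isn't reproduced here, but as a rough illustration of the masking idea (hypothetical names; the exact mobx-state-tree generics may differ between versions): copy the structural type that declaration emit would otherwise inline into a named interface, then annotate the model with it.

import { types as t, IModelType, ISimpleType, _NotCustomized } from "mobx-state-tree"

// Name the IModelType that emit would otherwise expand at every use
// site; the signature mirrors the .d.ts output shown earlier.
export interface ToastModelIType
  extends IModelType<
    { id: ISimpleType<string>; text: ISimpleType<string> },
    {},
    _NotCustomized,
    _NotCustomized
  > {}

export const ToastModel: ToastModelIType = t.model({
  id: t.string,
  text: t.string,
})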
TypeScript Version: 3.7.0-dev.20191011, 3.6.4, 3.5.2
Search Terms: DeepReadonly slow watch mode
Code
The slowness occurs on a codebase of around 1000 files. I can't distill it into a repro, but I can show the type that causes the slowness and an alternate type that does not.
I noticed the slowness when I replaced our implementation of DeepReadonly with the one from ts-essentials. One thing I should note, in case it is helpful, is that in our codebase DeepReadonly is only used about 80 times. It's also used nested in some instances - a DeepReadonly type is included as a property of another DeepReadonly type, for example.
Here is the type from ts-essentials:
Here is ours:
Expected behavior:
Both types, when used in our codebase, would take a similar amount of time for both a tsc build and the initial build of tsc --watch.
Actual behavior:
Our original DeepReadonly takes about 47 seconds to build using tsc. The initial build with tsc --watch also takes a similar amount of time, around 49 seconds.
With the ts-essentials version, a tsc build takes around 48 seconds. The initial build with tsc --watch takes anywhere from 3-5 minutes.
Playground Link:
N/A
Related Issues:
None that I know of for sure.