amcasey closed this issue 1 year ago.
The usual approach to handling huge files is to stream the contents rather than loading the entire file, but that's non-trivial with types files because, superficially at least, we require random access to other parts of the file when following references. For example, type 999999 might reference type 1000, which one would not expect to be in the same sliding window. It's possible that careful review will reveal a pattern of accesses more amenable to streaming.
In the meantime, my recommendation would be to call `analyze-trace-file` without passing a corresponding types file, and then decode the output manually using the output of `simplify-trace-types` (which does stream its input, since it doesn't need random access).
Having said all that, 512MB isn't that much - maybe we can just build up an in-memory representation and look up references in that.
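As a rough sketch of that in-memory idea, something like the following could stream `types.json` and build a `Map` keyed by type id, so a lookup such as type 1000 stays cheap while processing type 999999 without ever holding the whole file in one string. It assumes each type record sits on its own line of the JSON array (which is what makes line-oriented streaming workable at all); the function name and record shape are illustrative, not the package's actual API.

```ts
import { createReadStream } from "fs";
import { createInterface } from "readline";

// Sketch only: assumes types.json is a JSON array with one type record per
// line, e.g.
//   [{"id":1,...},
//    {"id":2,...},
//    ...]
// `loadTypes` and the `id` field handling are illustrative, not this repo's code.
async function loadTypes(typesPath: string): Promise<Map<number, object>> {
  const types = new Map<number, object>();
  const lines = createInterface({
    input: createReadStream(typesPath, { encoding: "utf-8" }),
    crlfDelay: Infinity,
  });
  for await (const line of lines) {
    // Strip the array punctuation so each line parses as a standalone object.
    const trimmed = line.trim().replace(/^\[/, "").replace(/[,\]]$/, "");
    if (!trimmed) continue;
    const record = JSON.parse(trimmed) as { id: number };
    types.set(record.id, record);
  }
  return types; // e.g. types.get(1000) while following a reference from type 999999
}
```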
@jakub-g FWIW, searching online suggests that the value might be higher in newer and 64-bit node versions.
Thank you @mikeduminy!
Happy to help :) it also benefited me haha
In our monorepo, we have 25k+ TS files, which leads to `types.json` weighing 700MB+ (`trace.json` is ~40MB). It seems that `readFile` has a limit of 512 MB (0x1fffffe8 characters):

https://github.com/microsoft/typescript-analyze-trace/blob/fbca82e26714dc44ac76d667a1b67072c1c30de4/src/analyze-trace-file.ts#L286-L291
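For context, 0x1fffffe8 is V8's maximum string length on many Node builds (it can be higher on newer 64-bit builds), and Node exposes it as `buffer.constants.MAX_STRING_LENGTH`; decoding a 700MB file into a single string fails regardless of available memory. A minimal sketch of an up-front size check, not taken from this repo's actual error handling:

```ts
import { constants as bufferConstants } from "buffer";
import { statSync } from "fs";

// Sketch only: fail early, with a clearer message, when a types file cannot
// possibly fit in a single V8 string. For mostly-ASCII JSON the byte count is
// a close approximation of the resulting string length. The function name and
// message are illustrative.
function checkTypesFileSize(typesPath: string): void {
  const { size } = statSync(typesPath);
  const maxChars = bufferConstants.MAX_STRING_LENGTH; // 0x1fffffe8 on many builds
  if (size > maxChars) {
    throw new Error(
      `${typesPath} is ${size} bytes, but this Node build can only hold ` +
      `${maxChars} characters in one string; read it in a streaming fashion instead.`
    );
  }
}
```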
Originally filed by @jakub-g as https://github.com/amcasey/ts-analyze-trace/issues/10.