hectcastro opened 4 years ago
I was able to determine that this CPU overhead is only associated with the initial compilation when tsc --watch is run for the NestJS service.

I tried configuring the /home/node/app/server volume mount with each type of performance tuning configuration, but the numbers didn't change.
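For reference, those attempts looked roughly like the sketch below, assuming Docker for Mac's bind-mount consistency flags (consistent, cached, delegated); the image tag and invocation are illustrative, not the project's actual setup.

# Hypothetical sketch: each run swapped the consistency flag on the bind
# mount (consistent, cached, or delegated) and re-measured CPU utilization.
docker run --rm \
  -v "$(pwd)/server:/home/node/app/server:delegated" \
  -w /home/node/app/server \
  node:12 yarn tsc --watch -p tsconfig.build.json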
I tried each available configuration for watching using the environment variable TSC_WATCHFILE. I also increased each of the TSC_WATCH_POLLINGINTERVAL levels to 5000ms.
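Concretely, those attempts amounted to something like this sketch; DynamicPriorityPolling is one of the documented TSC_WATCHFILE strategies, but the polling-interval variable name and value shown are assumptions to be checked against your TypeScript version.

# Illustrative watcher tuning via environment variables; not a fix.
export TSC_WATCHFILE=DynamicPriorityPolling   # file-watching strategy
export TSC_WATCH_POLLINGINTERVAL=5000         # raise polling interval to 5000ms
yarn tsc --watch -p tsconfig.build.json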
Through the help of folks in https://github.com/microsoft/TypeScript/issues/34119, I ran tsc --watch with the --extendedDiagnostics flag. Things appear to get stuck on the initial directory watching of node_modules:
yarn run v1.22.0
$ /home/node/app/server/node_modules/.bin/tsc --watch --extendedDiagnostics -p tsconfig.build.json
[4:55:32 PM] Starting compilation in watch mode...
Current directory: /home/node/app/server CaseSensitiveFileNames: true
FileWatcher:: Added:: WatchInfo: /home/node/app/server/tsconfig.build.json 2000 undefined Config file
Synchronizing program
CreatingProgramWith::
roots: ["/home/node/app/server/migrations/1580263521268-CreateUserTable.ts","/home/node/app/server/src/app.controller.ts","/home/node/app/server/src/app.module.ts","/home/node/app/server/src/app.service.ts","/home/node/app/server/src/main.ts","/home/node/app/server/src/healthcheck/healthcheck.service.ts","/home/node/app/server/src/users/users.module.ts","/home/node/app/server/src/users/controllers/users.controller.ts","/home/node/app/server/src/users/entities/user.entity.ts","/home/node/app/server/src/users/services/users.service.ts"]
options: {"module":1,"declaration":true,"removeComments":true,"emitDecoratorMetadata":true,"experimentalDecorators":true,"target":4,"sourceMap":true,"outDir":"/home/node/app/server/dist","baseUrl":"/home/node/app/server","incremental":true,"watch":true,"extendedDiagnostics":true,"project":"/home/node/app/server/tsconfig.build.json","configFilePath":"/home/node/app/server/tsconfig.build.json"}
FileWatcher:: Added:: WatchInfo: /home/node/app/server/migrations/1580263521268-CreateUserTable.ts 250 undefined Source file
FileWatcher:: Added:: WatchInfo: /home/node/app/server/node_modules/typeorm/index.d.ts 250 undefined Source file
DirectoryWatcher:: Added:: WatchInfo: /home/node/app/server/node_modules 1 undefined Failed Lookup Locations
Elapsed:: 52683ms DirectoryWatcher:: Added:: WatchInfo: /home/node/app/server/node_modules 1 undefined Failed Lookup Locations
FileWatcher:: Added:: WatchInfo: /home/node/app/server/node_modules/reflect-metadata/index.d.ts 250 undefined Source file
FileWatcher:: Added:: WatchInfo: /home/node/app/server/node_modules/typeorm/connection/ConnectionManager.d.ts 250 undefined Source file
FileWatcher:: Added:: WatchInfo: /home/node/app/server/node_modules/typeorm/connection/Connection.d.ts 250 undefined Source file
FileWatcher:: Added:: WatchInfo: /home/node/app/server/node_modules/typeorm/driver/Driver.d.ts 250 undefined Source file
. . .
According to https://github.com/microsoft/TypeScript/issues/33338, we can't ignore the node_modules directory, because the TypeScript compiler will watch all included source files. Another issue, https://github.com/microsoft/TypeScript/issues/25018, investigated poor initial performance with --watch and was closed with:
Closing this now since we have handled the optimization as best as we can for recursive directory watching where node doesn't support file system level events for recursive watching
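For background on that limitation: Node's fs.watch only honors the recursive option where the OS provides recursive file-system events (FSEvents on macOS, ReadDirectoryChangesW on Windows), and on Linux (the environment inside the Docker VM) Node versions of that era throw instead, which is why tsc falls back to one watcher per directory. A quick illustrative check:

# On Linux with older Node versions this throws
# ERR_FEATURE_UNAVAILABLE_ON_PLATFORM; on macOS it succeeds via FSEvents.
node -e 'require("fs").watch(".", { recursive: true }, console.log)'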
Had a discussion with @maurizi about this issue:

This issue implies using Docker-on-host instead of Vagrant. After we determine the cause of this CPU overhead on macOS and fix it, we could remove Vagrant + Ansible, which is good since it reduces the technical components and upgrades we will need to maintain in the long term.
@aaronxsu Have you looked into Docker for Mac's new experimental file system?
I tried it out and achieved performance much closer to native on an I/O-intensive application at @d3b-center. Here's an analysis (from a private repo) where I compared the same data analysis running in the Docker VM vs. using a native solution:
@rbreslow thanks for providing the analysis. I haven't had a chance to look into this new file system yet, but it sounds similar to something a team member recently dabbled with on another client project. This looks like a good lead to experiment with when we get to this issue, hopefully soon!
Description
With the changes introduced in https://github.com/PublicMapping/district-builder-2/pull/40, the server service causes the CPU to spike (100% utilization on all CPUs available to the virtual machine).
Steps To Reproduce
Follow the README to set up a local development environment.
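A rough sketch of the reproduction flow, assuming the repository follows its scripted docker-compose workflow; the script names here are illustrative and the README is authoritative.

# Hypothetical reproduction sketch; defer to the README for exact steps.
git clone https://github.com/PublicMapping/district-builder-2.git
cd district-builder-2
./scripts/setup    # provision the local development environment
./scripts/server   # start the server service, then observe CPU utilization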
Expected behavior
CPUs allocated to Docker are not saturated.
Actual behavior
CPUs allocated to Docker are saturated.
Your environment