archfz opened this issue 3 years ago
It is not replicated on:
Mint 20.1, kernel 5.4.0-66-generic, Docker 19.03.8, npm 6.14.4, node 10.19.0
A more detailed error log:
```
npm timing npm:load:configScope Completed in 0ms
npm timing npm:load:projectScope Completed in 0ms
npm timing npm:load Completed in 11ms
npm timing config:load:flatten Completed in 2ms
npm timing arborist:ctor Completed in 0ms
npm timing idealTree:init Completed in 1022ms
npm timing idealTree:userRequests Completed in 0ms
npm timing idealTree:#root Completed in 0ms
npm timing idealTree:buildDeps Completed in 1ms
npm timing idealTree:fixDepFlags Completed in 1ms
npm timing idealTree Completed in 1052ms
npm timing reify:loadTrees Completed in 1053ms
npm timing reify:diffTrees Completed in 41ms
npm timing reify:retireShallow Completed in 0ms
[ .........] / idealTree: timing reify:retireShallow Completed in 0ms

<--- Last few GCs --->

[1:0x4def0f0] 92372 ms: Mark-sweep 3973.7 (4136.6) -> 3964.6 (4139.4) MB, 5029.8 / 10.5 ms (average mu = 0.162, current mu = 0.007) allocation failure scavenge might not succeed
[1:0x4def0f0] 97260 ms: Mark-sweep 3981.3 (4139.6) -> 3972.0 (4144.1) MB, 4844.3 / 11.4 ms (average mu = 0.089, current mu = 0.009) allocation failure scavenge might not succeed

<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0xb12b40 node::Abort() [npm i]
 2: 0xa2fe25 node::FatalError(char const*, char const*) [npm i]
 3: 0xcf946e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [npm i]
 4: 0xcf97e7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [npm i]
 5: 0xee3875  [npm i]
 6: 0xef25f1 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [npm i]
 7: 0xef584c v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [npm i]
 8: 0xec1dfb v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [npm i]
 9: 0x122adbb v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [npm i]
10: 0x160c599  [npm i]
```
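(A common mitigation for this class of crash, not a fix for the underlying problem, is to raise V8's old-space heap limit via `NODE_OPTIONS`; this sketch assumes the container actually has that much memory available:)

```sh
# Raise V8's old-space heap limit to 4 GB for the npm process.
# The container's own memory limit must be at least this large.
NODE_OPTIONS="--max-old-space-size=4096" npm i
```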
Still reproduces on the following specs (Docker downgraded):
Still reproduces on the following specs (Docker and kernel downgraded):
Close match to @undoo's host specs.
Can confirm this issue on Ubuntu 20.04, Linux 5.4.0-72-generic x86_64, Docker 20.10.1.
I do not get the heap limit allocation error, but it does hang on

```
[ .........] / idealTree: timing reify:retireShallow Completed in 0ms
```

for at least a couple of minutes. And it happens on all 3 Docker image versions described by the OP.
Apparently the out-of-heap error doesn't reproduce anymore, but the process still hangs for a very long time on `reify:createSparse`, on any image. For this specific case on my system it's a whopping 180000 ms (three minutes). The process also uses up to 5 GB of RAM. Locally it still works fast.
Docker version 20.10.7, build f0df350; npm: 7.18.1
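(The per-stage timings quoted in this thread come straight from npm's verbose log, as in the output above, so one quick way to find the slow step is simply to filter for them:)

```sh
# Filter npm's verbose output down to the per-stage timing lines
# (e.g. reify:createSparse) to see where the time goes.
npm i --verbose 2>&1 | grep "npm timing"
```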
It happened again on the previously mentioned newer versions, but with the Ubuntu 16 Docker image:
```
<--- Last few GCs --->

[1:0x4129fa0] 123169 ms: Mark-sweep 2009.5 (2083.5) -> 2005.9 (2088.7) MB, 1366.4 / 8.3 ms (average mu = 0.068, current mu = 0.019) allocation failure scavenge might not succeed
[1:0x4129fa0] 124546 ms: Mark-sweep 2010.2 (2088.7) -> 2008.1 (2095.2) MB, 1366.8 / 7.8 ms (average mu = 0.038, current mu = 0.008) allocation failure GC in old space requested

<--- JS stacktrace --->

==== JS stack trace =========================================

 0: ExitFrame [pc: 0x140de99]
Security context: 0x17e2685408d1 <JSObject>
 1: /* anonymous */(aka /* anonymous */) [0x1c377207c309] [/usr/lib/node_modules/npm/node_modules/chownr/chownr.js:~123] [pc=0xd6d95ce922a](this=0x00d8c5d804b1 <undefined>,0x1c3772070739 <Dirent map = 0x5770c416059>)
 2: forEach [0x17e268556769](this=0x1c37720674e9 <JSArray[1514]>,0x1c377207c309 <JSFunction (sfi = 0x257cbac52951)>)
 3: /* anonymous */ [0...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0xa1a640 node::Abort() [npm i]
 2: 0xa1aa4c node::OnFatalError(char const*, char const*) [npm i]
 3: 0xb9a68e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [npm i]
 4: 0xb9aa09 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [npm i]
 5: 0xd57c85  [npm i]
 6: 0xd58316 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [npm i]
 7: 0xd64bd5 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [npm i]
 8: 0xd65a85 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [npm i]
 9: 0xd6853c v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [npm i]
10: 0xd2ef5b v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [npm i]
11: 0x10716ec v8::internal::Runtime_AllocateInOldGeneration(int, unsigned long*, v8::internal::Isolate*) [npm i]
12: 0x140de99  [npm i]
```
Action: try to replicate
@darcyclarke I am also experiencing a very slow `npm install` in a Docker container with npm 7. I have reproduced the performance issue on multiple machines (an Ubuntu 20.04 server and Windows using Docker Desktop with WSL2) and also on GitHub Actions. Check out my https://github.com/sonallux/npm7-performance-issue repository, and specifically this workflow run https://github.com/sonallux/npm7-performance-issue/actions/runs/966881282 with different run configurations.
| | Docker node:14 | Docker node:16 | Local Node 14 | Local Node 16 |
|---|---|---|---|---|
| `npm@7 install` | 187s | 193s | 68s | 56s |
| `npm@6 install` | 70s | 64s | 54s | 51s |
Edit: I have recorded new timings with the npm@7.24.0 version, see https://github.com/npm/cli/issues/3208#issuecomment-922504976
From the times it is clearly visible that an `npm@7 install` on Docker takes significantly (more than 2x) longer than locally. An `npm@6 install` on Docker does not have this performance issue. Using the alpine Docker image leads to the same results, and the `--legacy-peer-deps` flag improves the timings only very slightly (~10-20s).
@sonallux good summary.
For us this is a dealbreaker. Since the news promised better performance with npm 7, we wanted to update immediately. But we have pipelines, most projects have pipelines, and that is where performance matters the most.
@darcyclarke any news on this issue? Unfortunately, I am still experiencing the performance issue :/
I have recorded new timings with the latest npm version in this repo https://github.com/sonallux/npm7-performance-issue in this GitHub Actions run. The timings are all recorded with the `time` command.
| | | Docker node:14.17.6 | Docker node:16.9.1 | Docker node:14.17.6-alpine | Docker node:16.9.1-alpine | Local Node 14.17.6 | Local Node 16.9.1 |
|---|---|---|---|---|---|---|---|
| `npm@7.24.0 i` | real / user / sys (s) | 140.73 / 143.66 / 68.16 | 128.99 / 118.99 / 45.49 | 150.36 / 145.09 / 66.08 | 163.99 / 139.65 / 76.43 | 74.526 / 75.698 / 10.581 | 67.379 / 66.108 / 9.808 |
| `npm@7.24.0 i --legacy-peer-deps` | real / user / sys (s) | 141.48 / 136.63 / 58.41 | 153.14 / 132.21 / 58.17 | 121.43 / 124.17 / 62.29 | 158.33 / 143.17 / 76.08 | 61.892 / 63.315 / 8.508 | 81.233 / 66.614 / 8.595 |
| `npm@6.14.15 i` | real / user / sys (s) | 59.06 / 58.23 / 12.44 | 67.86 / 63.17 / 12.14 | 60.94 / 64.26 / 13.07 | 59.80 / 61.39 / 13.69 | 61.310 / 60.126 / 11.312 | 61.416 / 58.122 / 9.296 |
From the times it is clearly visible that an `npm@7 install` on Docker takes significantly (~2x) longer than locally. An `npm@6 install` on Docker does not have this performance issue. The issue is also not bound to the Node version or Docker base image used.
And this is not a GitHub Actions specific problem; I can also reproduce this behaviour on Windows using Docker Desktop with WSL2 and on my Ubuntu 20.04 server.
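(For reference, a minimal form of the comparison these numbers describe, using the `time` command and one of the image tags from the table above; paths and tags are illustrative:)

```sh
# Local install:
time npm i

# The same install inside Docker, with the project bind-mounted:
time docker run --rm -v "$PWD":/app -w /app node:16.9.1 npm i
```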
I got the same issue on Node 16.13 with npm 8.1 running Docker in Jenkins. The job is killed because of memory issues, which suggests it could be a memory leak.
I just want to add that I'm experiencing this issue as well on Node 16. This happened on Ubuntu 20.04 and CentOS 7 using different versions of k8s and Jenkins inbound agents. The only solution I found to alleviate the problem was to downgrade to npm v6; however, `npm ci` then breaks. I've tried upgrading to the latest npm package (8.3.x) and hit the same problem.
I am encountering this problem as well; I was able to work around it by creating an empty `node_modules` directory before running `npm install`/`npm ci`.
> I am encountering this problem as well; I was able to work around it by creating an empty `node_modules` directory before running `npm install`/`npm ci`.
This is extremely weird, but I can confirm that this:

```
mkdir node_modules
npm ci
```

is twice as fast (npm@7) or three times as fast (npm@8) as just:

```
npm ci
```

Hopefully this factoid will help someone at npm diagnose the root cause finally.
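(The same workaround as a single CI-friendly step; `mkdir -p` is idempotent, so it is safe on both fresh and cached checkouts:)

```sh
# Pre-create node_modules before npm ci; reported above to cut install time 2-3x.
mkdir -p node_modules && npm ci
```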
> I got the same issue on Node 16.13 with npm 8.1 running Docker in Jenkins. The job is killed because of memory issues, which suggests it could be a memory leak.

I am getting the same issue on the same configuration.
Hi (@darcyclarke?),
Is there anything we can do to help solve this? A setup with `node_modules` (?) is still bugged by this (slow, with high memory consumption compared to the same setup with npm 6.*).
I resolved this issue by running as a non-privileged user. Try using a non-root user in your Dockerfile.
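(A sketch of that suggestion using `docker run` directly; it assumes one of the official node images, which ship with an unprivileged `node` user:)

```sh
# Run the install as the non-root `node` user instead of the default root.
# The bind-mounted directory must be writable by that user (uid 1000).
docker run --rm -it --user node -v "$PWD":/app -w /app node:16-buster npm i --verbose
```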
> Is there anything we can do to help solve this? A setup with …

Do try the thing mentioned above (create `node_modules` before running `npm ci` or `npm i`) and report back.
> > Is there anything we can do to help solve this? A setup with …
>
> Do try the thing mentioned above (create `node_modules` before running `npm ci` or `npm i`) and report back.

Speed improves, in my case, by 50% (18s to 12s), but the memory usage is still 4x larger compared to v6.
I don't know if this is the best way of checking the max memory used, but in case someone finds it useful:

```
/usr/bin/time -v npm ci
```
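(With GNU time, the figure to look for is the "Maximum resident set size" line, e.g.:)

```sh
# -v prints detailed resource stats; max RSS is reported in kbytes.
/usr/bin/time -v npm ci 2>&1 | grep "Maximum resident set size"
```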
This bug has been haunting us for over a year now 😢
> I am encountering this problem as well; I was able to work around it by creating an empty `node_modules` directory before running `npm install`/`npm ci`.

> This is extremely weird, but I can confirm that this:
>
> ```
> mkdir node_modules
> npm ci
> ```
>
> is twice as fast (npm@7) or three times as fast (npm@8) as just:
>
> ```
> npm ci
> ```
>
> Hopefully this factoid will help someone at npm diagnose the root cause finally.
@n3dst4 I have lost so much time trying different things on GitHub Actions (+ Docker) to fix this after upgrading from npm@6 to npm@7 (this is where the issue was introduced, and it still persists on npm@8): using different Docker containers, digging into dependencies which may cause the issue, adding pre-dependencies which may be required for dependencies to be built, and more. Hours of debugging, and it was as simple as adding `mkdir node_modules` before `npm install`. You're my lifesaver. You can't imagine how relieved I feel right now. Thank you 🙏. I owe you a crate of beers or whatever you wish.
For us the main issue is the memory usage. We can't even test the speed of `npm ci`, since our containers die due to lack of memory. We noticed that when `npm ci` is run and the `.npm` cache folder exists, the memory usage is around 5 GB. When it's run without the `.npm` cache folder the usage is less, around 4 GB, but the difference is still huge compared to npm 6.
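(One way to reproduce the with/without-cache comparison described here is to point npm at a throwaway cache directory via the standard `--cache` config; the temp-dir choice is just illustrative:)

```sh
# Run npm ci against a fresh, empty cache instead of the existing ~/.npm:
npm ci --cache "$(mktemp -d)"
```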
Edit: my issue turned out to be #4896 (a dependency using the no-longer-supported `git://` protocol), and there is a fix for that here. After applying it, installs are back to normal.
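(The linked fix isn't quoted above; the commonly used form of it is git's `insteadOf` URL rewriting, which transparently maps the dead `git://` protocol to `https://`:)

```sh
# Rewrite git:// GitHub URLs to https:// for all git (and therefore npm) fetches.
git config --global url."https://github.com/".insteadOf git://github.com/
```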
Repro package.json & package-lock.json: npm-oom-test.zip

Running `npm install` fails with:

```
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
```

Edit: running with `--legacy-peer-deps` makes the install work, and it's fast.
Moving to yarn. I don't need 6 GB of memory in Docker just to install some deps. How can this still be an issue? 😢

> Moving to yarn. I don't need 6 GB of memory in Docker just to install some deps. How can this still be an issue? 😢

Or try `pnpm`?
Current Behavior:

Running npm 7 in a Docker image has some serious performance issues; running the same npm 7 locally does not. I was thinking about posting this in the Docker repository, but I am still unsure of the culprit and how it relates to the problem. I have found that `reify:createSparse` is the task that slows down dramatically, and as far as I saw it is creating directories, so I suspect this issue might be related to Docker volumes. On the other hand, with npm@6 there is no such issue in Docker.

When running the same thing on node:12-buster we also get the following out-of-heap error: `FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory`. This doesn't happen on Node 14 and 16. (* it actually just happened, so I might be wrong here.)

Originally I found this issue using an Ubuntu 16 Docker image, but these other images also fail in the same manner.
Expected Behavior:

No performance issues in Docker (vs. local). No out-of-heap errors.
Steps To Reproduce:

Test package.json

Example 1:

1. Run `npm i` to create the package lock in the directory where you created the above package.json.
2. `rm node_modules/ -rf`
3. Run `docker run --rm -it -v $PWD:/app -w /app node:16-buster npm i --legacy-peer-deps --verbose` in the directory where you created the above package.json.
4. Observe the slowdown at `reify:createSparse`.

Example 2:

1. Run `npm i` to create the package lock in the directory where you created the above package.json.
2. `rm node_modules/ -rf`
3. Run `docker run --rm -it -v $PWD:/app -w /app node:12-buster bash -c "npm i -g npm@7 && npm i --legacy-peer-deps --verbose"`.
Environment: