hardillb opened this issue 10 months ago
@nodejs/platform-s390
@hardillb have you tried this with the other non-Alpine containers?
@hardillb I assume there must be some additional setup required in terms of installing QEMU etc. Can you provide the platform you were running on, as well as the full set of setup steps required before you run the docker line?
Running on my x86 machine just gave me:

```
[midawson@drx-hemera ~]$ docker run --platform linux/s390x -it --cap-add=SYS_PTRACE -e QEMU_STRACE=true -e QEMU_LOG_FILENAME=qemu.log -v ./qemu.log:/qemu.log --rm node:18-alpine npm install node-red:3.1.0
docker: Error response from daemon: create ./qemu.log: "./qemu.log" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.
```
The latest version of Docker will accept relative paths for volume mounts, but older versions need an absolute path, so the full instructions are:

```shell
touch qemu.log
docker run --platform linux/s390x -it --cap-add=SYS_PTRACE -e QEMU_STRACE=true -e QEMU_LOG_FILENAME=qemu.log -v $(pwd)/qemu.log:/qemu.log --rm node:18-alpine npm install node-red@3.1.0
```

The `touch` creates an empty log file, and `$(pwd)` should expand to the current working directory. (I also changed `node-red:3.1.0` to `node-red@3.1.0`.)
I get the same behaviour swapping out the `18-alpine` tag with the `20-alpine` container, but I don't see it with the Debian-based builds (e.g. `20-slim`), which strongly suggests this may be an Alpine/musl-based issue.
I also get the same behaviour when running on a Fedora 38 or an Ubuntu-based host OS (and on GitHub Actions, which is where I originally saw this).
Do the host and guest system differ in endianness? I'm guessing the answer is 'yes' because s390x is big endian and the host is presumably x86_64? You're quite possibly hitting https://github.com/qemu/qemu/commit/44cf6731d6b and that's fixed in qemu 8.
All the hosts have been AMD64 based so far.
I'll try to work out how to get a copy of QEMU 8 working with Docker to test. Assuming that is the cause, the next problem may be how to get this all deployed to GH Actions with the actions from Docker.
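For reference, the binfmt helper image used later in this thread publishes versioned tags, so pinning a specific QEMU build for a test can be done in one command (a sketch; the exact tag to pick is an assumption based on tonistiigi/binfmt's published releases):

```shell
# Replace the currently registered emulators with ones from a QEMU v8 build
docker run --privileged --rm tonistiigi/binfmt:qemu-v8.0.4 --install all
```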
But also, this used to work, so given the fix relates to IPv6 addresses, has npmjs.org rolled out IPv6 recently?
I know that NodeJS changed its default behaviour about which address it presents first when both IPv4 & IPv6 addresses are returned via DNS.
I know there is the `--dns-result-order=ipv4first` command-line argument; is there an environment variable (which will be easier to get in scope) that will do the same that I can test?
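One candidate, assuming it behaves the same as the CLI flag: Node's documentation lists `--dns-result-order` among the options accepted via the `NODE_OPTIONS` environment variable, so a sketch like this should put it in scope for every node process in the container:

```shell
# NODE_OPTIONS is read by every node process started in the container,
# so the flag should also reach npm's internal node invocations
docker run --platform linux/s390x -it --rm \
  -e NODE_OPTIONS="--dns-result-order=ipv4first" \
  node:18-alpine npm install node-red@3.1.0
```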
I've just tested with the following:

```shell
docker run --platform linux/s390x -it --cap-add=SYS_PTRACE -e QEMU_STRACE=true -e QEMU_LOG_FILENAME=qemu.log -v $(pwd)/qemu.log:/qemu.log --rm node:18-alpine node --dns-result-order=ipv4first /usr/local/bin/npm install node-red@3.1.0
```

which should force the use of an IPv4 address, and I get the same hang, so it looks like this may not be the QEMU issue mentioned above.
@hardillb apologies for nosing into this issue but, to check my understanding, I believe you're running this on an x86-64 box, using QEMU to provide the s390x emulation? Is that the case?
Initially, I read it that you were running this on an actual s390x box but, of course, you'd not then need QEMU?
I say that because, for me, using Ubuntu 20.04.6 LTS with kernel 5.4.0-125-generic on an IBM Z (s390x) Virtual Server Instance (VSI), Docker doesn't hang:

```shell
docker run --platform linux/s390x -it --cap-add=SYS_PTRACE -e QEMU_STRACE=true -e QEMU_LOG_FILENAME=qemu.log -v $(pwd)/qemu.log:/qemu.log --rm node:18-alpine npm install node-red@3.1.0
```

but instead returns:

```
npm ERR! Tracker "idealTree" already exists
npm ERR! A complete log of this run can be found in: /root/.npm/_logs/2023-10-30T14_27_59_072Z-debug-0.log
```

with:

```
docker --version
Docker version 24.0.5, build 24.0.5-0ubuntu1~20.04.1
```
So, just to check, it's QEMU emulating s390x where the problem arises?
@davidhay1969 yes, this is when using the `docker buildx build` multi-arch capabilities on x86_64 (the default GH Actions runner), which uses QEMU to run the s390x build.
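For completeness, the same emulated path can be exercised outside Actions with a local buildx invocation (a sketch; the Dockerfile and image tag here are hypothetical):

```shell
# Assumes a Dockerfile in the current directory that runs `npm install`;
# buildx routes the s390x stage through the registered QEMU binfmt handler
docker buildx build --platform linux/s390x -t example/node-red:s390x .
```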
Hi @hardillb OK, so I used the following setup:

- x86-64
- Ubuntu 22.04.3 LTS
- kernel 5.15.0-79-generic
- Docker version 24.0.7, build afdd53b

```
dpkg --list | grep qemu
ii qemu 1:6.2+dfsg-2ubuntu6.15 amd64 fast processor emulator, dummy package
```

and set up multi-platform support as per Multi-platform images:

```shell
docker run --privileged --rm tonistiigi/binfmt --install all
```
which returned:

```json
{
  "supported": [
    "linux/amd64",
    "linux/arm64",
    "linux/riscv64",
    "linux/ppc64le",
    "linux/s390x",
    "linux/386",
    "linux/mips64le",
    "linux/mips64",
    "linux/arm/v7",
    "linux/arm/v6"
  ],
  "emulators": [
    "python3.10",
    "qemu-aarch64",
    "qemu-arm",
    "qemu-mips64",
    "qemu-mips64el",
    "qemu-ppc64le",
    "qemu-riscv64",
    "qemu-s390x"
  ]
}
```
With that in place, I was able to start a container from the `node:18-alpine` image:

```shell
docker run --platform linux/s390x -it node:18-alpine sh
```

and confirm the s390x emulation:

```
uname -a
Linux d95867f9014d 5.15.0-79-generic #86-Ubuntu SMP Mon Jul 10 16:07:21 UTC 2023 s390x Linux
```
I was then able to run the Node image as per your suggestion above:

```shell
touch qemu.log
docker run --platform linux/s390x -it --cap-add=SYS_PTRACE -e QEMU_STRACE=true -e QEMU_LOG_FILENAME=qemu.log -v $(pwd)/qemu.log:/qemu.log --rm node:18-alpine npm install node-red@3.1.0
```

which, as per my experiment on a real s390x VSI, returns:

```
npm ERR! Tracker "idealTree" already exists
npm ERR! A complete log of this run can be found in: /root/.npm/_logs/2023-10-30T15_47_18_622Z-debug-0.log
```

and `qemu.log` gets populated full o' stuff.
If I time the `docker run` command, I see:

```
real 0m16.148s
user 0m0.026s
sys 0m0.037s
```

So it takes a wee while but doesn't hang.
If I prune Docker via `docker system prune -a --volumes`, it takes a while longer, including the image pull:

```
time docker run --platform linux/s390x -it --cap-add=SYS_PTRACE -e QEMU_STRACE=true -e QEMU_LOG_FILENAME=qemu.log -v $(pwd)/qemu.log:/qemu.log --rm node:18-alpine npm install node-red@3.1.0
Unable to find image 'node:18-alpine' locally
18-alpine: Pulling from library/node
47539bffe0f4: Pull complete
48ff038cd430: Pull complete
8302ffd6337d: Pull complete
4261b725713f: Pull complete
Digest: sha256:435dcad253bb5b7f347ebc69c8cc52de7c912eb7241098b920f2fc2d7843183d
Status: Downloaded newer image for node:18-alpine
npm ERR! Tracker "idealTree" already exists
npm ERR! A complete log of this run can be found in: /root/.npm/_logs/2023-10-30T15_51_27_865Z-debug-0.log

real 0m29.484s
user 0m0.032s
sys 0m0.051s
```
Apart from Multi-platform images, I also reviewed Emulating a big-endian s390x with QEMU, especially given that I so rarely use `docker buildx` etc., having access to a bunch o' s390x infrastructure.
Not sure whether this is of any use?
@hardillb were my ramblings of any small use ?
It's still failing for all s390x builds running on GitHub Actions.
And when I run it locally, the only major difference is I'm on a 6.2.0-35 kernel.
@davidhay1969 just out of interest, does your test machine have IPv6 access?
Hey @hardillb just tested with a new x86-64 VM:

```
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.3 LTS
Release: 22.04
Codename: jammy
uname -a
Linux c27539v1.fyre.ibm.com 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
```
```shell
docker run --privileged --rm tonistiigi/binfmt --install all
```

```json
{
  "supported": [
    "linux/amd64",
    "linux/arm64",
    "linux/riscv64",
    "linux/ppc64le",
    "linux/s390x",
    "linux/386",
    "linux/mips64le",
    "linux/mips64",
    "linux/arm/v7",
    "linux/arm/v6"
  ],
  "emulators": [
    "python3.10",
    "qemu-aarch64",
    "qemu-arm",
    "qemu-mips64",
    "qemu-mips64el",
    "qemu-ppc64le",
    "qemu-riscv64",
    "qemu-s390x"
  ]
}
```
```shell
touch qemu.log
docker run --platform linux/s390x -it --cap-add=SYS_PTRACE -e QEMU_STRACE=true -e QEMU_LOG_FILENAME=qemu.log -v $(pwd)/qemu.log:/qemu.log --rm node:18-alpine npm install node-red@3.1.0
```

```
npm ERR! Tracker "idealTree" already exists
npm ERR! A complete log of this run can be found in: /root/.npm/_logs/2023-11-12T07_55_04_470Z-debug-0.log
```
With regard to IPv6, as of now, the answer is "No"; the VM doesn't have a v6 address, as evidenced by:

```
ip -6 address
<NOTHING RETURNED>
```

I'll enable IPv6 and see what goes 💥 💥
Using a VSI on IBM Cloud now, rather than something running here in HURS:

```
uname -a
Linux davehay-12112023 5.15.0-1025-ibm #28-Ubuntu SMP Tue Jan 24 17:51:55 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.3 LTS
Release: 22.04
Codename: jammy
ip -6 address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 fe80::2ff:fe30:ed82/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 state DOWN
    inet6 fe80::42:11ff:fe07:85aa/64 scope link
       valid_lft forever preferred_lft forever
```
```shell
docker run --privileged --rm tonistiigi/binfmt --install all
```

```json
{
  "supported": [
    "linux/amd64",
    "linux/arm64",
    "linux/riscv64",
    "linux/ppc64le",
    "linux/s390x",
    "linux/386",
    "linux/mips64le",
    "linux/mips64",
    "linux/arm/v7",
    "linux/arm/v6"
  ],
  "emulators": [
    "qemu-aarch64",
    "qemu-arm",
    "qemu-mips64",
    "qemu-mips64el",
    "qemu-ppc64le",
    "qemu-riscv64",
    "qemu-s390x"
  ]
}
```
```shell
docker run --platform linux/s390x -it node:18-alpine sh
```

```
uname -a
Linux 805650c563e9 5.15.0-1025-ibm #28-Ubuntu SMP Tue Jan 24 17:51:55 UTC 2023 s390x Linux
exit
```

```shell
touch qemu.log
docker run --platform linux/s390x -it --cap-add=SYS_PTRACE -e QEMU_STRACE=true -e QEMU_LOG_FILENAME=qemu.log -v $(pwd)/qemu.log:/qemu.log --rm node:18-alpine npm install node-red@3.1.0
```

```
Unable to find image 'node:18-alpine' locally
18-alpine: Pulling from library/node
47539bffe0f4: Pull complete
48ff038cd430: Pull complete
8302ffd6337d: Pull complete
4261b725713f: Pull complete
Digest: sha256:435dcad253bb5b7f347ebc69c8cc52de7c912eb7241098b920f2fc2d7843183d
Status: Downloaded newer image for node:18-alpine
npm ERR! Tracker "idealTree" already exists
npm ERR! A complete log of this run can be found in: /root/.npm/_logs/2023-11-12T08_57_30_795Z-debug-0.log
```
So, with both IPv4 and IPv6, things seem to just work for me ? 🤔 🤔 🤔 🤔
I've been playing with this again (as it's still a problem). I've been using AWS EC2 machines to try out a few different options.
Hmmm, much weirdness here 😢
I just had another quick poke on an x86-64 box, running Ubuntu 20.04.6 LTS:

```
uname -a
Linux c1952v1.fyre.ibm.com 5.4.0-169-generic #187-Ubuntu SMP Thu Nov 23 14:52:28 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
```
```
cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 61
model name : Intel Core Processor (Broadwell, IBRS)
stepping : 2
microcode : 0x1
cpu MHz : 2199.998
cache size : 16384 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat umip md_clear arch_capabilities
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa srbds mmio_unknown
bogomips : 4399.99
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 61
model name : Intel Core Processor (Broadwell, IBRS)
stepping : 2
microcode : 0x1
cpu MHz : 2199.998
cache size : 16384 KB
physical id : 1
siblings : 1
core id : 0
cpu cores : 1
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat umip md_clear arch_capabilities
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa srbds mmio_unknown
bogomips : 4399.99
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
```
and spun up a container using the `node:18-alpine` image:

```
docker images
REPOSITORY          TAG        IMAGE ID       CREATED         SIZE
node                18-alpine  d7cb21d1a90f   2 weeks ago     131MB
tonistiigi/binfmt   latest     354472a37893   17 months ago   60.2MB
```

but just starting a shell instead of the `npm install` command:

```shell
docker run --platform linux/s390x -it --cap-add=SYS_PTRACE -e QEMU_STRACE=true -e QEMU_LOG_FILENAME=qemu.log -v $(pwd)/qemu.log:/qemu.log --rm node:18-alpine sh
```
```
uname -a
Linux ee91155f16e6 5.4.0-169-generic #187-Ubuntu SMP Thu Nov 23 14:52:28 UTC 2023 s390x Linux
npm version
{
  npm: '10.2.3',
  node: '18.19.0',
  acorn: '8.10.0',
  ada: '2.7.2',
  ares: '1.20.1',
  base64: '0.5.0',
  brotli: '1.0.9',
  cjs_module_lexer: '1.2.2',
  cldr: '43.1',
  icu: '73.2',
  llhttp: '6.0.11',
  modules: '108',
  napi: '9',
  nghttp2: '1.57.0',
  nghttp3: '0.7.0',
  ngtcp2: '0.8.1',
  openssl: '3.0.12+quic',
  simdutf: '3.2.18',
  tz: '2023c',
  undici: '5.26.4',
  unicode: '15.0',
  uv: '1.44.2',
  uvwasi: '0.0.19',
  v8: '10.2.154.26-node.28',
  zlib: '1.2.13.1-motley'
}
```
I can run `npm install`:

```shell
npm install node-red@3.1.0
```

which does .... something:

```
npm ERR! Tracker "idealTree" already exists
npm ERR! A complete log of this run can be found in: /root/.npm/_logs/2023-12-29T10_22_01_062Z-debug-0.log
```
with a complete log:

```
cat /root/.npm/_logs/2023-12-29T10_22_01_062Z-debug-0.log
0 verbose cli /usr/local/bin/node /usr/local/bin/npm
1 info using npm@10.2.3
2 info using node@v18.19.0
3 timing npm:load:whichnode Completed in 43ms
4 timing config:load:defaults Completed in 66ms
5 timing config:load:file:/usr/local/lib/node_modules/npm/npmrc Completed in 109ms
6 timing config:load:builtin Completed in 122ms
7 timing config:load:cli Completed in 105ms
8 timing config:load:env Completed in 5ms
9 timing config:load:file:/.npmrc Completed in 10ms
10 timing config:load:project Completed in 123ms
11 timing config:load:file:/root/.npmrc Completed in 4ms
12 timing config:load:user Completed in 11ms
13 timing config:load:file:/usr/local/etc/npmrc Completed in 4ms
14 timing config:load:global Completed in 11ms
15 timing config:load:setEnvs Completed in 36ms
16 timing config:load Completed in 509ms
17 timing npm:load:configload Completed in 513ms
18 timing config:load:flatten Completed in 74ms
19 timing npm:load:mkdirpcache Completed in 10ms
20 timing npm:load:mkdirplogs Completed in 5ms
21 verbose title npm install node-red@3.1.0
22 verbose argv "install" "node-red@3.1.0"
23 timing npm:load:setTitle Completed in 29ms
24 timing npm:load:display Completed in 13ms
25 verbose logfile logs-max:10 dir:/root/.npm/_logs/2023-12-29T10_22_01_062Z-
26 verbose logfile /root/.npm/_logs/2023-12-29T10_22_01_062Z-debug-0.log
27 timing npm:load:logFile Completed in 268ms
28 timing npm:load:timers Completed in 2ms
29 timing npm:load:configScope Completed in 1ms
30 timing npm:load Completed in 1457ms
31 timing config:load:flatten Completed in 26ms
32 timing arborist:ctor Completed in 17ms
33 silly logfile done cleaning log files
34 timing arborist:ctor Completed in 5ms
35 timing idealTree:init Completed in 1251ms
36 timing idealTree:userRequests Completed in 54ms
37 silly idealTree buildDeps
38 timing idealTree Completed in 1339ms
39 timing command:install Completed in 3097ms
40 verbose stack Error: Tracker "idealTree" already exists
40 verbose stack at [_onError] (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/tracker.js:100:11)
40 verbose stack at Arborist.addTracker (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/tracker.js:27:21)
40 verbose stack at #buildDeps (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:768:10)
40 verbose stack at Arborist.buildIdealTree (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:196:28)
40 verbose stack at async Promise.all (index 1)
40 verbose stack at async Arborist.reify (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/reify.js:159:5)
40 verbose stack at async Install.exec (/usr/local/lib/node_modules/npm/lib/commands/install.js:152:5)
40 verbose stack at async module.exports (/usr/local/lib/node_modules/npm/lib/cli-entry.js:61:5)
41 verbose cwd /
42 verbose Linux 5.4.0-169-generic
43 verbose node v18.19.0
44 verbose npm v10.2.3
45 error Tracker "idealTree" already exists
46 verbose exit 1
47 timing npm Completed in 8478ms
48 verbose unfinished npm timer reify 1703845326489
49 verbose unfinished npm timer reify:loadTrees 1703845328079
50 verbose unfinished npm timer idealTree:buildDeps 1703845329421
51 verbose code 1
52 error A complete log of this run can be found in: /root/.npm/_logs/2023-12-29T10_22_01_062Z-debug-0.log
```
So stuff appears to happen, i.e. the process never hangs for me, but I don't know `npm` well enough to know what errors such as:

```
40 verbose stack Error: Tracker "idealTree" already exists
```

etc. mean.
I also tried the `8.0.6` build of `qemu` as per your suggestion:

```shell
docker run --privileged --rm tonistiigi/binfmt:qemu-v8.0.4 --install all
```

with the same effect:

```
docker run --platform linux/s390x -it --cap-add=SYS_PTRACE -e QEMU_STRACE=true -e QEMU_LOG_FILENAME=qemu.log -v $(pwd)/qemu.log:/qemu.log --rm node:18-alpine npm install node-red@3.1.0
npm ERR! Tracker "idealTree" already exists
npm ERR! A complete log of this run can be found in: /root/.npm/_logs/2023-12-29T10_28_30_672Z-debug-0.log
```

Not sure what I'm doing wrong? 🤔
Picking up on the `npm ERR! Tracker "idealTree" already exists` message, I Googled and found this:

npm ERR! Tracker "idealTree" already exists while creating the Docker image for Node project

so did the same as before:

```shell
docker run --platform linux/s390x -it --cap-add=SYS_PTRACE -e QEMU_STRACE=true -e QEMU_LOG_FILENAME=qemu.log -v $(pwd)/qemu.log:/qemu.log --rm node:18-alpine sh
```

but then created/used a directory, within which I ran the `npm install` command:

```shell
mkdir foobar
cd foobar/
npm install node-red@3.1.0
```
This took longer, with a nice progress bar e.g.

```
[###############...] \ reify:string_decoder: http fetch GET 200 https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz 42470ms (ca
....
[##################] \ reify:@node-red/editor-client: http fetch GET 200 https://registry.npmjs.org/@node-red/editor-client/-/editor-client-3.1.
```

but finally (after ~3 mins) completed with:

```
added 301 packages in 3m

43 packages are looking for funding
  run `npm fund` for details
npm notice
npm notice New patch version of npm available! 10.2.3 -> 10.2.5
npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.2.5
npm notice Run npm install -g npm@10.2.5 to update!
npm notice
```
```
ls -altrc
total 140
drwxr-xr-x   1 root root     59 Dec 29 10:30 ..
-rw-r--r--   1 root root     53 Dec 29 10:33 package.json
-rw-r--r--   1 root root 118904 Dec 29 10:33 package-lock.json
drwxr-xr-x   3 root root     87 Dec 29 10:33 .
drwxr-xr-x 257 root root   8192 Dec 29 10:33 node_modules
-rw-r--r--   1 root root   1112 Dec 29 10:35 qemu.log
```
```
npm list
foobar@ /foobar
+-- @mapbox/node-pre-gyp@1.0.11 extraneous
+-- aproba@2.0.0 extraneous
+-- are-we-there-yet@2.0.0 extraneous
+-- color-support@1.1.3 extraneous
+-- console-control-strings@1.1.0 extraneous
+-- delegates@1.0.0 extraneous
+-- detect-libc@2.0.2 extraneous
+-- emoji-regex@8.0.0 extraneous
+-- gauge@3.0.2 extraneous
+-- has-unicode@2.0.1 extraneous
+-- is-fullwidth-code-point@3.0.0 extraneous
+-- make-dir@3.1.0 extraneous
+-- node-addon-api@5.1.0 extraneous
+-- node-fetch@2.7.0 extraneous
+-- node-red@3.1.0
+-- npmlog@5.0.1 extraneous
+-- rimraf@3.0.2 extraneous
+-- set-blocking@2.0.0 extraneous
+-- signal-exit@3.0.7 extraneous
+-- string-width@4.2.3 extraneous
+-- tr46@0.0.3 extraneous
+-- webidl-conversions@3.0.1 extraneous
+-- whatwg-url@5.0.0 extraneous
`-- wide-align@1.1.5 extraneous
```
Not sure if any of this is of any use @hardillb 😢
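The workaround above (running `npm install` from a dedicated directory rather than `/`, which appears to be what trips the `idealTree` tracker error) can be condensed into a single non-interactive command; `/usr/src/app` is an arbitrary choice:

```shell
# Same experiment as above, but without an interactive shell: create a
# working directory first so npm does not run from the container's / root
docker run --platform linux/s390x --rm node:18-alpine \
  sh -c 'mkdir -p /usr/src/app && cd /usr/src/app && npm install node-red@3.1.0'
```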
OK, this is now failing with armv7 and NodeJS v20 for the new Node-RED 4.0 beta builds.
This may be related to https://gitlab.com/qemu-project/qemu/-/issues/2485.
Version
v18.18.2
Platform

```
Linux c8fdac6bf542 6.4.15-200.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Sep 7 00:25:01 UTC 2023 s390x Linux
```
Subsystem
No response
What steps will reproduce the bug?
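Presumably the reproduction is the command used throughout nodejs/docker-node#1973 (an assumption; it also presupposes QEMU user-mode emulation has already been registered via binfmt on an x86_64 host):

```shell
# Run npm under emulated s390x; on affected hosts this hangs at 100% CPU
docker run --platform linux/s390x -it --rm node:18-alpine npm install node-red@3.1.0
```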
How often does it reproduce? Is there a required condition?
Every time
What is the expected behavior? Why is that the expected behavior?
npm should complete its run on emulated s390x, as this is the standard way to build s390x containers when there is no access to s390x hardware.
What do you see instead?
A hang and 100% CPU usage from the npm process.
Additional information
See https://github.com/nodejs/docker-node/issues/1973 for qemu strace output