Closed magicse closed 3 months ago
Hi @Wetzel402 Have you tried the latest Docker image from Docker Hub? Is the problem still there? I'm currently trying to build an official Home Assistant Core Docker image without using external wheels, compiling the wheels during the image build instead. If everything works with that image, then the problem is 100% in the wheels from the armv7 repository.
git clone https://github.com/home-assistant/core.git 03.hass-core
cd 03.hass-core
docker buildx build --output type=docker --platform linux/arm/v7 --build-arg BUILD_FROM=ghcr.io/home-assistant/armv7-homeassistant-base:2023.02.0 --build-arg QEMU_CPU=cortex-a15 .
While building, I see a problem compiling the matplotlib==3.6.1 package, which requires forcing the --use-pep517 option to build. I found a mention of problems with matplotlib==3.6.2 on armv7 in the file core/homeassistant/package_constraints.txt (https://github.com/home-assistant/core/pull/85540#issue-1525955425). But matplotlib has long since reached version 3.7.0, so pinning version 3.6.1 may not be needed at all.
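As a sketch of the workaround (not the official build step), the PEP 517 source build can be forced via pip. The helper name `pep517_install_cmd` is hypothetical; it only prints the command, so it can be reviewed or pasted into a Dockerfile RUN line:

```shell
# Hypothetical helper: compose (but do not run) the pip invocation that
# forces a PEP 517 source build, the flag matplotlib needed here.
pep517_install_cmd() {
    echo "pip3 install --use-pep517 --no-binary :all: $*"
}

pep517_install_cmd matplotlib==3.6.1
# prints: pip3 install --use-pep517 --no-binary :all: matplotlib==3.6.1
```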
Removing the site-packages folder inside the original Home Assistant image and rebuilding the packages gives a fully working Home Assistant, without segfaults and without the restart loop.
bash-5.1# rm -rf /usr/local/lib/python3.10/site-packages/*
bash-5.1# apk add g++ gcc make cmake
bash-5.1# apk add cargo
bash-5.1# apk add python3-dev jpeg-dev libffi-dev libftdi1-dev bzip2-dev ffmpeg-dev openssl-dev libxml2-dev libxslt-dev
bash-5.1# python3 -m ensurepip --upgrade
bash-5.1# cd /usr/src/homeassistant
bash-5.1# pip3 install -e .
bash-5.1# python3 -c "import sys; print((sys._base_executable, sys.version))"
bash-5.1# ('/usr/local/bin/python3', '3.10.7 (main, Nov 24 2022, 13:02:43) [GCC 11.2.1 20220219]')
bash-5.1# uname -a
bash-5.1# Linux 0b265030057f 4.2.8 #2 SMP Thu Jan 12 10:44:50 CST 2023 armv7l Linux
bash-5.1# uname -m
bash-5.1# armv7l
bash-5.1# ldd /usr/local/lib/python3.10/site-packages/yarl/_quoting_c.cpython-310-arm-linux-gnueabihf.so
/lib/ld-musl-armhf.so.1 (0x54278000)
libc.musl-armv7.so.1 => /lib/ld-musl-armhf.so.1 (0x54278000)
bash-5.1# readelf -A /usr/local/lib/python3.10/site-packages/yarl/_quoting_c.cpython-310-arm-linux-gnueabihf.so
Attribute Section: aeabi
File Attributes
Tag_CPU_name: "7-A"
Tag_CPU_arch: v7
Tag_CPU_arch_profile: Application
Tag_ARM_ISA_use: Yes
Tag_THUMB_ISA_use: Thumb-2
Tag_FP_arch: VFPv3-D16
Tag_ABI_PCS_wchar_t: 4
Tag_ABI_FP_denormal: Needed
Tag_ABI_FP_exceptions: Needed
Tag_ABI_FP_number_model: IEEE 754
Tag_ABI_align_needed: 8-byte
Tag_ABI_enum_size: int
Tag_ABI_VFP_args: VFP registers
Tag_CPU_unaligned_access: v6
bash-5.1# hass --config /config
Here is a fragment of the strace log just before the SIGSEGV, for future reference.
bash-5.1# strace python -c "import numpy"
open("/usr/local/lib/python3.10/site-packages/numpy/core/__pycache__/multiarray.cpython-310.pyc", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
statx(3, "", AT_STATX_SYNC_AS_STAT|AT_EMPTY_PATH, STATX_BASIC_STATS, 0x7dfed658) = -1 ENOSYS (Function not implemented)
fstat64(3, {st_mode=S_IFREG|0644, st_size=53810, ...}) = 0
ioctl(3, TIOCGWINSZ, 0x7dfed8c0) = -1 ENOTTY (Not a tty)
_llseek(3, 0, [0], SEEK_CUR) = 0
_llseek(3, 0, [0], SEEK_CUR) = 0
statx(3, "", AT_STATX_SYNC_AS_STAT|AT_EMPTY_PATH, STATX_BASIC_STATS, 0x7dfed828) = -1 ENOSYS (Function not implemented)
fstat64(3, {st_mode=S_IFREG|0644, st_size=53810, ...}) = 0
mmap2(NULL, 65536, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x75698000
read(3, "o\r\r\n\0\0\0\0\306L\355c\212\330\0\0\343\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 53811) = 53810
read(3, "", 1) = 0
close(3) = 0
mmap2(NULL, 32768, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x75778000
mmap2(NULL, 32768, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x75690000
munmap(0x75698000, 65536) = 0
statx(AT_FDCWD, "/usr/local/lib/python3.10/site-packages/numpy/core", AT_STATX_SYNC_AS_STAT, STATX_BASIC_STATS, 0x7dfecae8) = -1 ENOSYS (Function not implemented)
stat64("/usr/local/lib/python3.10/site-packages/numpy/core", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
statx(AT_FDCWD, "/usr/local/lib/python3.10/site-packages/numpy/core/overrides.py", AT_STATX_SYNC_AS_STAT, STATX_BASIC_STATS, 0x7dfec8c8) = -1 ENOSYS (Function not implemented)
stat64("/usr/local/lib/python3.10/site-packages/numpy/core/overrides.py", {st_mode=S_IFREG|0644, st_size=7297, ...}) = 0
statx(AT_FDCWD, "/usr/local/lib/python3.10/site-packages/numpy/core/overrides.py", AT_STATX_SYNC_AS_STAT, STATX_BASIC_STATS, 0x7dfecb78) = -1 ENOSYS (Function not implemented)
stat64("/usr/local/lib/python3.10/site-packages/numpy/core/overrides.py", {st_mode=S_IFREG|0644, st_size=7297, ...}) = 0
open("/usr/local/lib/python3.10/site-packages/numpy/core/__pycache__/overrides.cpython-310.pyc", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
statx(3, "", AT_STATX_SYNC_AS_STAT|AT_EMPTY_PATH, STATX_BASIC_STATS, 0x7dfeca88) = -1 ENOSYS (Function not implemented)
fstat64(3, {st_mode=S_IFREG|0644, st_size=6716, ...}) = 0
ioctl(3, TIOCGWINSZ, 0x7dfeccf0) = -1 ENOTTY (Not a tty)
_llseek(3, 0, [0], SEEK_CUR) = 0
_llseek(3, 0, [0], SEEK_CUR) = 0
statx(3, "", AT_STATX_SYNC_AS_STAT|AT_EMPTY_PATH, STATX_BASIC_STATS, 0x7dfecc58) = -1 ENOSYS (Function not implemented)
fstat64(3, {st_mode=S_IFREG|0644, st_size=6716, ...}) = 0
mmap2(NULL, 32768, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x756a0000
read(3, "o\r\r\n\0\0\0\0\306L\355c\201\34\0\0\343\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 6717) = 6716
read(3, "", 1) = 0
close(3) = 0
munmap(0x756a0000, 32768) = 0
statx(AT_FDCWD, "/usr/local/lib/python3.10/site-packages/numpy/core", AT_STATX_SYNC_AS_STAT, STATX_BASIC_STATS, 0x7dfec2d0) = -1 ENOSYS (Function not implemented)
stat64("/usr/local/lib/python3.10/site-packages/numpy/core", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
statx(AT_FDCWD, "/usr/local/lib/python3.10/site-packages/numpy/core/_multiarray_umath.cpython-310-arm-linux-gnueabihf.so", AT_STATX_SYNC_AS_STAT, STATX_BASIC_STATS, 0x7dfec0b0) = -1 ENOSYS (Function not implemented)
stat64("/usr/local/lib/python3.10/site-packages/numpy/core/_multiarray_umath.cpython-310-arm-linux-gnueabihf.so", {st_mode=S_IFREG|0755, st_size=2374876, ...}) = 0
open("/usr/local/lib/python3.10/site-packages/numpy/core/_multiarray_umath.cpython-310-arm-linux-gnueabihf.so", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
statx(3, "", AT_STATX_SYNC_AS_STAT|AT_EMPTY_PATH, STATX_BASIC_STATS, 0x7dfebc50) = -1 ENOSYS (Function not implemented)
fstat64(3, {st_mode=S_IFREG|0755, st_size=2374876, ...}) = 0
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\0\0\0\0004\0\0\0"..., 936) = 936
mmap2(NULL, 2555904, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x75420000
mmap2(0x75650000, 229376, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x220000) = 0x75650000
mmap2(0x75670000, 98304, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x75670000
mmap2(0x75680000, 32768, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x238000) = 0x75680000
mmap2(0x75680000, 65536, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x240000) = 0x75680000
close(3) = 0
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=NULL} ---
+++ killed by SIGSEGV +++
Segmentation fault
Also, the ldd output for one of numpy's libraries in the original image:
bash-5.1# ldd /usr/local/lib/python3.10/site-packages/numpy/core/_multiarray_umath.cpython-310-arm-linux-gnueabihf.so
/lib/ld-musl-armhf.so.1 (0x541e8000)
Bus error
The ldd output for Python:
bash-5.1# ldd python
/lib/ld-musl-armhf.so.1: cannot load python: No such file or directory
bash-5.1# ldd /usr/local/bin/python
/lib/ld-musl-armhf.so.1 (0x54620000)
libpython3.10.so.1.0 => /usr/local/lib/libpython3.10.so.1.0 (0x75c78000)
libc.musl-armv7.so.1 => /lib/ld-musl-armhf.so.1 (0x54620000)
bash-5.1#
Also, when debugging "import numpy" I see the problem is with the _multiarray_umath library:
Starting program: /usr/local/bin/python
Python 3.10.7 (main, Nov 24 2022, 13:02:43) [GCC 11.2.1 20220219] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Program received signal SIGSEGV, Segmentation fault.
sysv_lookup (s=s@entry=0x75fe97f7 "__libc_start_main", h=h@entry=24641422, dso=dso@entry=0x7dffbdc8) at ldso/dynlink.c:249
249 char *strings = dso->strings;
(gdb) bt
#0 sysv_lookup (s=s@entry=0x75fe97f7 "__libc_start_main", h=h@entry=24641422, dso=dso@entry=0x7dffbdc8) at ldso/dynlink.c:249
#1 0x75fb87e4 in find_sym2 (use_deps=0, need_def=1, s=0x75fe97f7 "__libc_start_main", dso=0x7dffbdc8) at ldso/dynlink.c:313
#2 find_sym (dso=dso@entry=0x7dffbdc8, s=0x75fe97f7 "__libc_start_main", need_def=need_def@entry=1) at ldso/dynlink.c:334
#3 0x75fa6f7c in load_library (name=name@entry=0x759348a0 "/usr/local/lib/python3.10/site-packages/numpy/core/_multiarray_umath.cpython-310-arm-linux-gnueabihf.so",
needed_by=<optimized out>) at ldso/dynlink.c:1128
#4 0x75fa7f34 in dlopen (file=0x759348a0 "/usr/local/lib/python3.10/site-packages/numpy/core/_multiarray_umath.cpython-310-arm-linux-gnueabihf.so", mode=2)
at ldso/dynlink.c:2089
#5 0x75e07c54 in ?? () from /usr/local/lib/libpython3.10.so.1.0
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
@magicse, great work. Sounds like HA wheels need to be rebuilt.
@Wetzel402 @Gerigot @boyarale @pvizeli @frenck
Conclusion: the library binaries from the wheels were compiled for the wrong architecture, ARMv8.
The readelf command for the _multiarray_umath.cpython-310-arm-linux-gnueabihf.so library from the original Home Assistant armv7 image gives the following file attributes:
readelf -A /usr/local/lib/python3.10/site-packages/numpy/core/_multiarray_umath.cpython-310-arm-linux-gnueabihf.so
Attribute Section: aeabi
File Attributes (CPU 8-A, arch v8, optimized for NEON) ))?? What a wonderful tale.....
Tag_CPU_name: "8-A"
Tag_CPU_arch: v8
Tag_CPU_arch_profile: Application
Tag_ARM_ISA_use: Yes
Tag_THUMB_ISA_use: Thumb-2
Tag_FP_arch: FP for ARMv8
Tag_Advanced_SIMD_arch: NEON for ARMv8
Tag_ABI_PCS_wchar_t: 4
Tag_ABI_FP_denormal: Needed
Tag_ABI_FP_exceptions: Needed
Tag_ABI_FP_number_model: IEEE 754
Tag_ABI_align_needed: 8-byte
Tag_ABI_enum_size: int
Tag_ABI_VFP_args: VFP registers
Tag_CPU_unaligned_access: v6
Tag_ABI_FP_16bit_format: IEEE 754
Tag_MPextension_use: Allowed
Tag_Virtualization_use: TrustZone and Virtualization Extensions
For example, Python shows normal attributes for armv7:
readelf -A /usr/local/bin/python3
Attribute Section: aeabi
File Attributes
Tag_CPU_name: "7-A"
Tag_CPU_arch: v7
Tag_CPU_arch_profile: Application
Tag_ARM_ISA_use: Yes
Tag_THUMB_ISA_use: Thumb-2
Tag_FP_arch: VFPv3-D16
Tag_ABI_PCS_wchar_t: 4
Tag_ABI_FP_rounding: Needed
Tag_ABI_FP_denormal: Needed
Tag_ABI_FP_exceptions: Needed
Tag_ABI_FP_number_model: IEEE 754
Tag_ABI_align_needed: 8-byte
Tag_ABI_enum_size: int
Tag_ABI_VFP_args: VFP registers
Tag_ABI_optimization_goals: Aggressive Size
Tag_CPU_unaligned_access: v6
You can compare the working (rebuilt yourself) libs against the non-working libs from the original Home Assistant image with the following commands and see the difference in build architecture.
Also, in the original image the lib _multiarray_umath.cpython-310-arm-linux-gnueabihf.so is accompanied by 3 libs, while the rebuilt one has only one.
great work. Sounds like HA wheels need to be rebuilt.
Yes, sure. Although an ARMv7 image can be built on an ARMv8 host (in AArch32 mode), this is not guaranteed to work if a binary is built with features available on ARMv8 but not on ARMv7.
Tag_CPU_name: "8-A"
/usr/local/lib/python3.10/site-packages/numpy/core/_simd.cpython-310-arm-linux-gnueabihf.so
Tag_CPU_name: "7-A"
/usr/local/lib/python3.10/site-packages/numpy/core/_rational_tests.cpython-310-arm-linux-gnueabihf.so
Tag_CPU_name: "8-A"
/usr/local/lib/python3.10/site-packages/numpy/core/_multiarray_umath.cpython-310-arm-linux-gnueabihf.so
Tag_CPU_name: "7-A"
/usr/local/lib/python3.10/site-packages/numpy/core/_operand_flag_tests.cpython-310-arm-linux-gnueabihf.so
Tag_CPU_name: "7-A"
/usr/local/lib/python3.10/site-packages/numpy/core/_struct_ufunc_tests.cpython-310-arm-linux-gnueabihf.so
Tag_CPU_name: "7-A"
/usr/local/lib/python3.10/site-packages/numpy/core/_multiarray_tests.cpython-310-arm-linux-gnueabihf.so
Tag_CPU_name: "8.2-A"
/usr/local/lib/python3.10/site-packages/numpy/core/_umath_tests.cpyt
Tag_CPU_name: "8-A"
/usr/local/lib/python3.10/site-packages/lxml.libs/libgcrypt-8f7f978d.so.20.4.1
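The per-file readelf checks above can be automated; a minimal sketch, assuming binutils' readelf is available in the image and the site-packages path from the listings (the `cpu_name` and `scan_extensions` helper names are my own):

```shell
# cpu_name: read `readelf -A` output on stdin, print the Tag_CPU_name value.
cpu_name() {
    awk -F'"' '/Tag_CPU_name/ { print $2; exit }'
}

# scan_extensions DIR: report every .so under DIR whose ELF attributes
# are not plain ARMv7 ("7-A").
scan_extensions() {
    find "$1" -name '*.so' 2>/dev/null | while read -r so; do
        cpu=$(readelf -A "$so" 2>/dev/null | cpu_name)
        [ "$cpu" = "7-A" ] || echo "WRONG ARCH (${cpu:-unknown}): $so"
    done
}

# Usage inside the armv7 image:
#   scan_extensions /usr/local/lib/python3.10/site-packages
```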
@Wetzel402 @Gerigot @boyarale @pvizeli @frenck
readelf -aW /usr/local/lib/python3.10/site-packages/_brotli.cpython-310-arm-linux-gnueabihf.so
Dynamic section at offset 0xa5dfc contains 28 entries:
Tag Type Name/Value
0x0000000f (RPATH) Library rpath: [$ORIGIN/Brotli.libs] <---- ??????
0x00000001 (NEEDED) Shared library: [libstdc++-ad7c573d.so.6.0.29] <---- ??????
0x00000001 (NEEDED) Shared library: [libgcc_s-8c7760c8.so.1] <---- ??????
0x00000001 (NEEDED) Shared library: [libc.musl-armv7.so.1]
readelf -aW /usr/local/lib/python3.10/site-packages/_brotli.cpython-310-arm-linux-gnueabihf.so
Dynamic section at offset 0xa5dfc contains 28 entries:
Tag Type Name/Value
0x00000001 (NEEDED) Shared library: [libstdc++.so.6]
0x00000001 (NEEDED) Shared library: [libgcc_s.so.1]
0x00000001 (NEEDED) Shared library: [libc.so]
The wheels ship duplicate binaries of the same third-party libs rather than using the system libstdc++.so.6 and libgcc_s.so.1:
bash-5.1# find / -name "libstdc++-ad7c573d.so.6.0.29"
/usr/local/lib/python3.10/site-packages/ha_av.libs/libstdc++-ad7c573d.so.6.0.29
/usr/local/lib/python3.10/site-packages/kiwisolver.libs/libstdc++-ad7c573d.so.6.0.29
/usr/local/lib/python3.10/site-packages/pyitachip2ir.libs/libstdc++-ad7c573d.so.6.0.29
/usr/local/lib/python3.10/site-packages/pandas.libs/libstdc++-ad7c573d.so.6.0.29
/usr/local/lib/python3.10/site-packages/contourpy.libs/libstdc++-ad7c573d.so.6.0.29
/usr/local/lib/python3.10/site-packages/grpcio.libs/libstdc++-ad7c573d.so.6.0.29
/usr/local/lib/python3.10/site-packages/matplotlib.libs/libstdc++-ad7c573d.so.6.0.29
/usr/local/lib/python3.10/site-packages/Brotli.libs/libstdc++-ad7c573d.so.6.0.29
/usr/local/lib/python3.10/site-packages/cchardet.libs/libstdc++-ad7c573d.so.6.0.29
bash-5.1# find / -name "libgcc_s-8c7760c8.so.1"
/usr/local/lib/python3.10/site-packages/ha_av.libs/libgcc_s-8c7760c8.so.1
/usr/local/lib/python3.10/site-packages/kiwisolver.libs/libgcc_s-8c7760c8.so.1
/usr/local/lib/python3.10/site-packages/pyitachip2ir.libs/libgcc_s-8c7760c8.so.1
/usr/local/lib/python3.10/site-packages/pandas.libs/libgcc_s-8c7760c8.so.1
/usr/local/lib/python3.10/site-packages/contourpy.libs/libgcc_s-8c7760c8.so.1
/usr/local/lib/python3.10/site-packages/grpcio.libs/libgcc_s-8c7760c8.so.1
/usr/local/lib/python3.10/site-packages/msgpack.libs/libgcc_s-8c7760c8.so.1
/usr/local/lib/python3.10/site-packages/matplotlib.libs/libgcc_s-8c7760c8.so.1
/usr/local/lib/python3.10/site-packages/bcrypt.libs/libgcc_s-8c7760c8.so.1
/usr/local/lib/python3.10/site-packages/cffi.libs/libgcc_s-8c7760c8.so.1
/usr/local/lib/python3.10/site-packages/cryptography.libs/libgcc_s-8c7760c8.so.1
/usr/local/lib/python3.10/site-packages/orjson.libs/libgcc_s-8c7760c8.so.1
/usr/local/lib/python3.10/site-packages/cchardet.libs/libgcc_s-8c7760c8.so.1
bash-5.1#
bash-5.1# find / -name "libstdc++.so.6"
/usr/lib/libstdc++.so.6
bash-5.1#
bash-5.1# find / -name "libgcc_s.so.1"
/usr/lib/libgcc_s.so.1
bash-5.1#
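A quick way to check which libraries an extension actually asks the loader for is to pull the NEEDED (and RPATH) entries out of `readelf -dW` output; a small sketch, with a helper name (`needed_libs`) of my own:

```shell
# Sketch: print the DT_NEEDED and DT_RPATH values from `readelf -dW`
# output fed on stdin, e.g.:
#   readelf -dW _brotli.cpython-310-arm-linux-gnueabihf.so | needed_libs
needed_libs() {
    awk -F'[][]' '/\(NEEDED\)|\(RPATH\)/ { print $2 }'
}
```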
@magicse Please... stop pushing everything. Just like spamming and creating issues all over the place, tagging and mentioning people to get attention is not something that is generally appreciated.
I know you must mean it in a friendly way, but it may come across as demanding and annoying, which has the opposite effect of what you are trying to do.
Please consider this a last warning for that. Thanks 👍
../Frenck
Hello, @frenck I apologize for my activity. It's just that now I see that I've been noticed. Once again I apologize
I see that I've been noticed.
That doesn't give you an advantage, because of how you demand attention. That is the point.
After reinstalling the packages that were linked against the libstdc++-ad7c573d.so.6.0.29 and libgcc_s-8c7760c8.so.1 libraries, we get packages linked against the system libs "libstdc++.so.6" & "libgcc_s.so.1" and a working Home Assistant. To avoid overwriting them with the wrong packages from the repo, unset WHEELS_LINKS.
pip install --no-binary :all: --no-cache-dir --force-reinstall numpy==1.23.2 ha_av==10.0.0 \
kiwisolver==1.4.4 pyitachip2ir==0.0.7 pandas==1.4.3 contourpy==1.0.7 \
grpcio==1.51.1 matplotlib==3.6.1 brotli cchardet==2.1.7
pip install --no-binary :all: --no-cache-dir --force-reinstall msgpack==1.0.4 numpy==1.23.2 \
bcrypt==4.0.1 cffi==1.15.1 cryptography==39.0.1 orjson==3.8.6
And the same for the packages linked against the libraries "libgcrypt-8f7f978d.so.20.4.1", "libcrypto-74f9fe93.so.1.1", "libcrypto-acc7ab14.so.1.1" and "libssl-5abe59d2.so.1.1":
pip install --no-cache-dir --force-reinstall pyyaml==6.0 lxml==4.9.1 mysqlclient==2.1.1 psycopg2==2.9.5 uamqp==1.6.0
Elvis has left the building....
Further exploration of the problem with the libs that produce SIGSEGV, using the cchardet package as an example:
GNU gdb (GDB) 11.2
This GDB was configured as "armv7-alpine-linux-musleabihf".
Reading symbols from python...
(No debugging symbols found in python)
(gdb) r
Starting program: /usr/local/bin/python -c import\ cchardet
Program received signal SIGSEGV, Segmentation fault.
0x75fb86c2 in gnu_lookup_filtered (h1=h1@entry=4131212846, hashtab=0x759ae1ec, dso=dso@entry=0x7dffdd90, s=s@entry=0x75fe97f7 "__libc_start_main", fofs=fofs@entry=129100401, fmask=fmask@entry=16384) at ldso/dynlink.c:282
282 size_t f = bloomwords[fofs & (hashtab[2]-1)];
(gdb) bt
#0 0x75fb86c2 in gnu_lookup_filtered (h1=h1@entry=4131212846, hashtab=0x759ae1ec, dso=dso@entry=0x7dffdd90, s=s@entry=0x75fe97f7 "__libc_start_main",
fofs=fofs@entry=129100401, fmask=fmask@entry=16384) at ldso/dynlink.c:282
#1 0x75fb87a8 in find_sym2 (use_deps=0, need_def=1, s=0x75fe97f7 "__libc_start_main", dso=0x7dffdd90) at ldso/dynlink.c:310
#2 find_sym (dso=dso@entry=0x7dffdd90, s=0x75fe97f7 "__libc_start_main", need_def=need_def@entry=1) at ldso/dynlink.c:334
#3 0x75fa6f7c in load_library (name=0x75a63f8a "libstdc++-ad7c573d.so.6.0.29", needed_by=needed_by@entry=0x75ff1c40) at ldso/dynlink.c:1128
#4 0x75fa72fa in load_direct_deps (p=0x75ff1c40) at ldso/dynlink.c:1221
#5 load_deps (p=0x75ff1c40) at ldso/dynlink.c:1238
#6 load_deps (p=<optimized out>) at ldso/dynlink.c:1234
#7 0x75fa8050 in dlopen (file=0x75a981f0 "/usr/local/lib/python3.10/site-packages/cchardet/_cchardet.cpython-310-arm-linux-gnueabihf.so", mode=2) at ldso/dynlink.c:2100
#8 0x75e07c54 in ?? () from /usr/local/lib/libpython3.10.so.1.0
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb)
The error message suggests that there might be an issue with loading a library (libstdc++-ad7c573d.so.6.0.29).
bash-5.1# ldd _cchardet.cpython-310-arm-linux-gnueabihf.so
/lib/ld-musl-armhf.so.1 (0x54250000)
bash-5.1#
Based on the output of the gdb and ldd commands, there seems to be an issue loading the shared libraries required by _cchardet.cpython-310-arm-linux-gnueabihf.so. The ldd output shows that the dynamic linker is /lib/ld-musl-armhf.so.1, the musl C library's linker, and the gdb backtrace shows the segmentation fault occurring inside that musl dynamic linker (ldso/dynlink.c) while it resolves symbols.
ldd output for libstdc++-ad7c573d.so.6.0.29
bash-5.1# ls
libgcc_s-8c7760c8.so.1 libstdc++-ad7c573d.so.6.0.29
bash-5.1# ldd libstdc++-ad7c573d.so.6.0.29
/lib/ld-musl-armhf.so.1 (0x540a8000)
Error loading shared library : Invalid argument (needed by libstdc++-ad7c573d.so.6.0.29)
Error loading shared library error_code: No such file or directory (needed by libstdc++-ad7c573d.so.6.0.29)
Segmentation fault
And this is the ldd output for the rebuilt lib _cchardet.cpython-310-arm-linux-gnueabihf.so (the unresolved Py* symbols are expected: ldd does not load the Python interpreter, which provides them at import time):
bash-5.1# ldd _cchardet.cpython-310-arm-linux-gnueabihf.so
/lib/ld-musl-armhf.so.1 (0x54660000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x759a0000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x75980000)
libc.so => /lib/ld-musl-armhf.so.1 (0x54660000)
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: PyUnicode_FromFormat: symbol not found
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: PyDict_SetItemString: symbol not found
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: PyDict_Size: symbol not found
....
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: _Py_TrueStruct: symbol not found
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: PyBaseObject_Type: symbol not found
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: PyExc_ImportError: symbol not found
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: PyExc_AttributeError: symbol not found
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: PyExc_NameError: symbol not found
bash-5.1#
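When reading such ldd output it helps to filter out the expected noise; a sketch (the `real_ldd_errors` name is my own), dropping the Py*/_Py* relocation errors that are normal for CPython extension modules, so only genuine loader errors remain:

```shell
# Sketch: drop the "Error relocating ...: Py...: symbol not found" lines
# that are expected for extension modules; keep real loader errors.
# Usage: ldd some_module.so 2>&1 | real_ldd_errors
real_ldd_errors() {
    grep -v 'Error relocating .*: _*Py'
}
```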
For the same Home Assistant image ("docker run homeassistant/home-assistant:latest"), ldd output on QEMU armv7 for the libs of the cchardet package (taken as an example):
bash-5.1# cd /usr/local/lib/python3.10/site-packages/cchardet.libs/
bash-5.1# ls
libgcc_s-8c7760c8.so.1 libstdc++-ad7c573d.so.6.0.29
bash-5.1#
bash-5.1# ldd libstdc++-ad7c573d.so.6.0.29
/lib/ld-musl-armhf.so.1 (0x40000000)
libc.musl-armv7.so.1 => /lib/ld-musl-armhf.so.1 (0x40000000)
Error loading shared library libgcc_s-8c7760c8.so.1: No such file or directory (needed by libstdc++-ad7c573d.so.6.0.29)
Error relocating libstdc++-ad7c573d.so.6.0.29: __aeabi_uidiv: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: _Unwind_GetRegionStart: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: _Unwind_GetTextRelBase: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: _Unwind_RaiseException: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: __aeabi_idivmod: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: _Unwind_Resume_or_Rethrow: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: __ctzdi2: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: _Unwind_GetLanguageSpecificData: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: _Unwind_VRS_Get: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: __gnu_unwind_frame: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: __aeabi_idiv: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: __aeabi_ldivmod: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: __aeabi_l2d: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: __aeabi_uidivmod: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: _Unwind_Complete: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: __aeabi_uldivmod: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: _Unwind_GetDataRelBase: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: _Unwind_VRS_Set: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: __popcountsi2: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: _Unwind_DeleteException: symbol not found
Error relocating libstdc++-ad7c573d.so.6.0.29: _Unwind_Resume: symbol not found
bash-5.1# ldd libgcc_s-8c7760c8.so.1 lib
/lib/ld-musl-armhf.so.1 (0x40000000)
libc.musl-armv7.so.1 => /lib/ld-musl-armhf.so.1 (0x40000000)
bash-5.1#
ldd on QNAP armv7 for the libs of the cchardet package (taken as an example):
bash-5.1# cd /usr/local/lib/python3.10/site-packages/cchardet.libs/
bash-5.1# ls
libgcc_s-8c7760c8.so.1 libstdc++-ad7c573d.so.6.0.29
bash-5.1#
bash-5.1# ldd libstdc++-ad7c573d.so.6.0.29
/lib/ld-musl-armhf.so.1 (0x54618000)
Error loading shared library : Invalid argument (needed by libstdc++-ad7c573d.so.6.0.29)
Error loading shared library error_code: No such file or directory (needed by libstdc++-ad7c573d.so.6.0.29)
Segmentation fault
bash-5.1#
bash-5.1# ldd libgcc_s-8c7760c8.so.1
/lib/ld-musl-armhf.so.1 (0x547e8000)
Error loading shared library : Invalid argument (needed by libgcc_s-8c7760c8.so.1)
Error relocating libgcc_s-8c7760c8.so.1: : symbol not found
Error relocating libgcc_s-8c7760c8.so.1: : symbol not found
Error relocating libgcc_s-8c7760c8.so.1: : symbol not found
Error relocating libgcc_s-8c7760c8.so.1: : symbol not found
Error relocating libgcc_s-8c7760c8.so.1: : symbol not found
Error relocating libgcc_s-8c7760c8.so.1: : symbol not found
Error relocating libgcc_s-8c7760c8.so.1: : symbol not found
Error relocating libgcc_s-8c7760c8.so.1: : symbol not found
Error relocating libgcc_s-8c7760c8.so.1: : symbol not found
Error relocating libgcc_s-8c7760c8.so.1: : symbol not found
Error relocating libgcc_s-8c7760c8.so.1: : symbol not found
Error relocating libgcc_s-8c7760c8.so.1: : symbol not found
Error relocating libgcc_s-8c7760c8.so.1: : symbol not found
bash-5.1#
To rule out QEMU's influence on the wheel build process, I decided to build the library both in QEMU ARMv7 and on an Ubuntu 64-bit host machine, using the original Home Assistant armv7 image. For the tests, I used the Python package "cchardet", which ships with the Home Assistant image, produces SIGSEGV errors on QNAP, is small, and has no dependencies on libs from other packages.
SIGSEGV on QNAP ARMv7 with the original lib _cchardet.cpython-310-arm-linux-gnueabihf.so, linked against libstdc++-ad7c573d.so.6.0.29 from the "cchardet" package of the Home Assistant wheels repo:
bash-5.1# python
Python 3.10.10 (main, Feb 14 2023, 15:33:41) [GCC 11.2.1 20220219] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cchardet
Segmentation fault
bash-5.1#
Building the lib on the Ubuntu 64-bit host and QEMU ARMv7:
bash-5.1# wget https://files.pythonhosted.org/packages/a8/5d/090c9f0312b7988a9433246c9cf0b566b1ae1374368cfb8ac897218a4f65/cchardet-2.1.7.tar.gz
bash-5.1# tar -xf cchardet-2.1.7.tar.gz
bash-5.1# cd cchardet-2.1.7
bash-5.1# python3 setup.py sdist bdist_wheel
After compiling, I got the built library _cchardet.cpython-310-arm-linux-gnueabihf.so in QEMU. I then copied it onto the QNAP into /usr/local/lib/python3.10/site-packages/cchardet, overwriting the old non-working library, and the test passed without any errors.
bash-5.1# python
Python 3.10.10 (main, Feb 14 2023, 15:33:41) [GCC 11.2.1 20220219] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cchardet
>>>
Hence the conclusion that the problem is not related to compilation under the QEMU emulator; most likely it lies on the wheel builders' side, somewhere inside those build processes.
Ldd output of _cchardet.cpython-310-arm-linux-gnueabihf.so
ldd _cchardet.cpython-310-arm-linux-gnueabihf.so
/lib/ld-musl-armhf.so.1 (0x546b8000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x75e58000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x75e38000)
libc.so => /lib/ld-musl-armhf.so.1 (0x546b8000)
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: PyUnicode_FromFormat: symbol not found
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: PyDict_SetItemString: symbol not found
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: PyDict_Size: symbol not found
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: _PyThreadState_UncheckedGet: symbol not found
.......
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: PyExc_ImportError: symbol not found
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: PyExc_AttributeError: symbol not found
Error relocating _cchardet.cpython-310-arm-linux-gnueabihf.so: PyExc_NameError: symbol not found
strace log when loading the library libstdc++-ad7c573d.so.6.0.29 via ldd:
bash-5.1# strace ldd /usr/local/lib/python3.10/site-packages/cchardet.libs/libstdc++-ad7c573d.so.6.0.29
execve("/usr/bin/ldd", ["ldd", "/usr/local/lib/python3.10/site-p"...], 0x7df97cf4 /* 16 vars */) = 0
set_tls(0x75c42400) = 0
set_tid_address(0x75c417cc) = 4824
brk(NULL) = 0x55570000
brk(0x55580000) = 0x55580000
mmap2(0x55570000, 32768, PROT_NONE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x55570000
getuid32() = 0
getpid() = 4824
rt_sigprocmask(SIG_UNBLOCK, [RT_1 RT_2], NULL, 8) = 0
rt_sigaction(SIGCHLD, {sa_handler=0x54807c11, sa_mask=~[RTMIN RT_1 RT_2], sa_flags=SA_RESTORER, sa_restorer=0x75bf81fb}, NULL, 8) = 0
getppid() = 4821
statx(AT_FDCWD, "/config", AT_STATX_SYNC_AS_STAT, STATX_BASIC_STATS, 0x7da27698) = -1 ENOSYS (Function not implemented)
stat64("/config", {st_mode=S_IFDIR|0777, st_size=4096, ...}) = 0
statx(AT_FDCWD, ".", AT_STATX_SYNC_AS_STAT, STATX_BASIC_STATS, 0x7da27698) = -1 ENOSYS (Function not implemented)
stat64(".", {st_mode=S_IFDIR|0777, st_size=4096, ...}) = 0
open("/usr/bin/ldd", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
fcntl64(3, F_DUPFD_CLOEXEC, 10) = 10
fcntl64(10, F_SETFD, FD_CLOEXEC) = 0
close(3) = 0
rt_sigaction(SIGINT, NULL, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
rt_sigaction(SIGINT, {sa_handler=0x54807c11, sa_mask=~[RTMIN RT_1 RT_2], sa_flags=SA_RESTORER, sa_restorer=0x75bf81fb}, NULL, 8) = 0
rt_sigaction(SIGQUIT, NULL, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
rt_sigaction(SIGQUIT, {sa_handler=SIG_IGN, sa_mask=~[RTMIN RT_1 RT_2], sa_flags=SA_RESTORER, sa_restorer=0x75bf81fb}, NULL, 8) = 0
rt_sigaction(SIGTERM, NULL, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
read(10, "#!/bin/sh\nexec /lib/ld-musl-armh"..., 2047) = 51
rt_sigaction(SIGQUIT, {sa_handler=SIG_DFL, sa_mask=~[RTMIN RT_1 RT_2], sa_flags=SA_RESTORER, sa_restorer=0x75bf81fb}, NULL, 8) = 0
execve("/lib/ld-musl-armhf.so.1", ["/lib/ld-musl-armhf.so.1", "--list", "/usr/local/lib/python3.10/site-p"...], 0x75c40414 /* 16 vars */) = 0
set_tls(0x54672400) = 0
set_tid_address(0x546717cc) = 4824
open("/usr/local/lib/python3.10/site-packages/cchardet.libs/libstdc++-ad7c573d.so.6.0.29", O_RDONLY|O_LARGEFILE) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\0\0\0\0004\0\0\0"..., 936) = 936
mmap2(NULL, 2228224, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x757f8000
mmap2(0x75948000, 65536, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x140000) = 0x75948000
mmap2(0x75950000, 458752, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x148000) = 0x75950000
mmap2(0x759b8000, 393216, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x1a8000) = 0x759b8000
close(3) = 0
writev(1, [{iov_base="\t/lib/ld-musl-armhf.so.1 (0x545e"..., iov_len=38}, {iov_base=NULL, iov_len=0}], 2 /lib/ld-musl-armhf.so.1 (0x545e8000)
) = 38
brk(NULL) = 0x54a98000
brk(0x54aa8000) = 0x54aa8000
mmap2(0x54a98000, 32768, PROT_NONE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x54a98000
writev(2, [{iov_base="Error loading shared library : I"..., iov_len=59}, {iov_base="/usr/local/lib/python3.10/site-p"..., iov_len=82}], 2Error loading shared library : Invalid argument (needed by /usr/local/lib/python3.10/site-packages/cchardet.libs/libstdc++-ad7c573d.so.6.0.29) = 141
writev(2, [{iov_base=")", iov_len=1}, {iov_base=NULL, iov_len=0}], 2)) = 1
writev(2, [{iov_base="\n", iov_len=1}, {iov_base=NULL, iov_len=0}], 2
) = 1
open("/etc/ld-musl-armhf.path", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/lib/error_code", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/usr/local/lib/error_code", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/usr/lib/error_code", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = -1 ENOENT (No such file or directory)
writev(2, [{iov_base="Error loading shared library err"..., iov_len=78}, {iov_base="/usr/local/lib/python3.10/site-p"..., iov_len=82}], 2Error loading shared library error_code: No such file or directory (needed by /usr/local/lib/python3.10/site-packages/cchardet.libs/libstdc++-ad7c573d.so.6.0.29) = 160
writev(2, [{iov_base=")", iov_len=1}, {iov_base=NULL, iov_len=0}], 2)) = 1
writev(2, [{iov_base="\n", iov_len=1}, {iov_base=NULL, iov_len=0}], 2
) = 1
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x75fae200} ---
+++ killed by SIGSEGV +++
Segmentation fault
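A lighter-weight way to reproduce the loader failure, without strace, is to try dlopen-ing the bundled library directly from Python. This is a diagnostic sketch (the `try_load` helper is my own name; the path is the one from the trace above):

```python
import ctypes

def try_load(path: str):
    """Attempt to dlopen a shared object; return None on success,
    or the loader's error message on failure."""
    try:
        ctypes.CDLL(path)
        return None
    except OSError as exc:
        return str(exc)

# Path taken from the strace output above; on a healthy install this prints nothing.
err = try_load(
    "/usr/local/lib/python3.10/site-packages/"
    "cchardet.libs/libstdc++-ad7c573d.so.6.0.29"
)
if err is not None:
    print("load failed:", err)
```

If the broken auditwheel-bundled libstdc++ is the culprit, this fails with the same loader error the strace shows, without triggering the full segfault path through the cchardet import.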
@Wetzel402 @Gerigot
apk add gcc g++ make cmake
Downloading the latest patchelf, because auditwheel requires patchelf >= 0.14 but Alpine 3.16 ships patchelf 0.12.
mkdir /home/patchelf
cd /home/patchelf/
wget https://github.com/NixOS/patchelf/releases/download/0.17.2/patchelf-0.17.2-armv7l.tar.gz
tar -xf patchelf-0.17.2-armv7l.tar.gz
cp -r /home/patchelf/bin/* /usr/bin/ && cp -r /home/patchelf/share/* /usr/share/
bash-5.1# patchelf --version
patchelf 0.17.2
bash-5.1#
Installing auditwheel:
pip install auditwheel
Building the cchardet wheel:
cd /home
wget https://files.pythonhosted.org/packages/a8/5d/090c9f0312b7988a9433246c9cf0b566b1ae1374368cfb8ac897218a4f65/cchardet-2.1.7.tar.gz
tar -xf cchardet-2.1.7.tar.gz
cd cchardet-2.1.7
python3 setup.py sdist bdist_wheel
Repairing the built wheel with auditwheel:
auditwheel repair dist/cchardet-2.1.7-cp310-cp310-linux_armv7l.whl -w wheel_repaired/
INFO:auditwheel.main_repair:Repairing cchardet-2.1.7-cp310-cp310-linux_armv7l.whl
INFO:auditwheel.wheeltools:Previous filename tags: linux_armv7l
INFO:auditwheel.wheeltools:New filename tags: musllinux_1_2_armv7l
INFO:auditwheel.wheeltools:Previous WHEEL info tags: cp310-cp310-linux_armv7l
INFO:auditwheel.wheeltools:New WHEEL info tags: cp310-cp310-musllinux_1_2_armv7l
INFO:auditwheel.main_repair:
Fixed-up wheel written to /home/cchardet-2.1.7/wheel_repaired/cchardet-2.1.7-cp310-cp310-musllinux_1_2_armv7l.whl
bash-5.1#
And we get a fully built wheel of the cchardet package, with the dependent system libraries (patched by patchelf) bundled inside.
cd wheel_repaired/
unzip -l cchardet-2.1.7-cp310-cp310-musllinux_1_2_armv7l.whl
Archive: cchardet-2.1.7-cp310-cp310-musllinux_1_2_armv7l.whl
Length Date Time Name
--------- ---------- ----- ----
0 03-05-2023 14:16 cchardet/
0 03-05-2023 14:16 cchardet-2.1.7.dist-info/
0 03-05-2023 14:16 cchardet.libs/
0 03-05-2023 14:16 cchardet-2.1.7.data/
22 03-05-2023 14:16 cchardet/version.py
1161 03-05-2023 14:16 cchardet/__init__.py
233409 03-05-2023 14:16 cchardet/_cchardet.cpython-310-arm-linux-gnueabihf.so
113 03-05-2023 14:16 cchardet-2.1.7.dist-info/WHEEL
9 03-05-2023 14:16 cchardet-2.1.7.dist-info/top_level.txt
958 03-05-2023 14:16 cchardet-2.1.7.dist-info/RECORD
7674 03-05-2023 14:16 cchardet-2.1.7.dist-info/METADATA
70517 03-05-2023 14:16 cchardet-2.1.7.dist-info/COPYING
2138625 03-05-2023 14:16 cchardet.libs/libstdc++-ad7c573d.so.6.0.29
41309 03-05-2023 14:16 cchardet.libs/libgcc_s-8c7760c8.so.1
0 03-05-2023 14:16 cchardet-2.1.7.data/scripts/
1281 03-05-2023 14:16 cchardet-2.1.7.data/scripts/cchardetect
-------- -------
2495078 16 files
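The unzip listing above can also be scripted. A small sketch using Python's zipfile module to pull out the shared libraries auditwheel bundled into the repaired wheel (the helper name is my own):

```python
import zipfile

def bundled_libs(wheel_path):
    """Return the names of shared objects that auditwheel bundled
    into the wheel's *.libs/ directory."""
    with zipfile.ZipFile(wheel_path) as whl:
        return [
            name for name in whl.namelist()
            if ".libs/" in name and ".so" in name
        ]

# e.g. bundled_libs("cchardet-2.1.7-cp310-cp310-musllinux_1_2_armv7l.whl")
# should list libstdc++-ad7c573d.so.6.0.29 and libgcc_s-8c7760c8.so.1,
# matching the unzip listing above.
```

Comparing this list (and the file sizes) between a working wheel and a segfaulting one is a quick way to narrow the problem down to the bundled libraries.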
Now we copy this package to the QNAP, try to install it with pip, and check whether it works. First, check cchardet before installation:
bash-5.1# python -c "import cchardet"
Segmentation fault
bash-5.1#
Installing our own new build of cchardet:
cd /home
bash-5.1# ls
cchardet-2.1.7-cp310-cp310-musllinux_1_2_armv7l.whl
pip install --force-reinstall cchardet-2.1.7-cp310-cp310-musllinux_1_2_armv7l.whl
Processing ./cchardet-2.1.7-cp310-cp310-musllinux_1_2_armv7l.whl
Installing collected packages: cchardet
Attempting uninstall: cchardet
Found existing installation: cchardet 2.1.7
Uninstalling cchardet-2.1.7:
Successfully uninstalled cchardet-2.1.7
Successfully installed cchardet-2.1.7
Testing the newly built cchardet package:
bash-5.1# python -c "import cchardet"
Segmentation fault
bash-5.1#
And fiasco: we get the same SEGFAULT.
As shown in the posts above, without auditwheel we get fully working packages.
Conclusion: the problem is related to incorrect operation of auditwheel and patchelf, which most likely do not correctly copy and patch the system libraries (libstdc++, libgcc_s) and their ELF dependencies on ARMv7.
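One quick sanity check on what auditwheel/patchelf produced is to read the ELF identification bytes of the bundled .so directly. The strace read earlier shows `\177ELF\1\1\1 ... \3\0(\0`, i.e. a 32-bit little-endian shared object with e_machine 0x28 (EM_ARM). A minimal sketch (field offsets per the ELF32 header layout):

```python
import struct

EM_ARM = 0x28  # e_machine value for 32-bit ARM

def elf_ident(header: bytes):
    """Parse the start of an ELF file and return (bitness, endianness, e_machine)."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    bitness = {1: "32-bit", 2: "64-bit"}[header[4]]             # EI_CLASS
    endian = {1: "little-endian", 2: "big-endian"}[header[5]]   # EI_DATA
    # e_machine is a 16-bit field at offset 18 (after the 16-byte e_ident and e_type);
    # "<H" assumes little-endian, which EI_DATA == 1 confirms for this file.
    (machine,) = struct.unpack_from("<H", header, 18)
    return bitness, endian, machine

# Usage, e.g.:
# with open(".../cchardet.libs/libstdc++-ad7c573d.so.6.0.29", "rb") as f:
#     print(elf_ident(f.read(20)))   # expect ('32-bit', 'little-endian', EM_ARM)
```

A header that parses as correct ARMv7 here, combined with the segfault at load time, points the blame at the program headers and segments that patchelf rewrote rather than at a wrong-architecture build.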
@Wetzel402 @lswysocki @boyarale @Gerigot @pvizeli
Finally, the problem was resolved with a patch for patchelf's incorrect behavior when used with auditwheel: https://github.com/NixOS/patchelf/issues/474. Dear Home Assistant developers, please, if possible, update the image and wheels for ARMv7 with the new patchelf. Many thanks to everyone who helped me.
@magicse I have now warned you a couple of times, even in this issue thread. This is the final warning. If I find you again pinging and pulling people into issues, PRs, or anything else who are not directly involved and interacting with the thread, I will ban you from interacting with this organization.
This is your last and final warning.
../Frenck
Hi Frenck, thank you for understanding.
Thank me for understanding? Your last comment makes no sense to me.
Did you even get what I have been saying?
Yes sure. There is a war in my country, so I'm probably too emotional.
Sorry to hear that, however, not related and not justifying any of your actions above.
../Frenck
I'll try not to do that again. I was in a hurry to solve the problem because every two hours we had a power outage lasting 4-6 hours, since our energy system has been damaged.
Hi Frenck, the patch for patchelf was accepted: https://github.com/NixOS/patchelf/issues/474#issuecomment-1464988318. After rebuilding the wheels and the Home Assistant image with this new patchelf (it needs to be built from source), everything will work. pvizeli is aware of this issue and suggested this: https://github.com/NixOS/patchelf/issues/474#issuecomment-1455803962
There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. Please make sure to update to the latest Home Assistant version and check if that solves the issue. Let us know if that works for you by adding a comment 👍 This issue has now been marked as stale and will be closed if no further activity occurs. Thank you for your contributions.
It still didn't work; the problem is the same as above: https://github.com/NixOS/patchelf/issues/474#issuecomment-1464988318. Until a patched patchelf is used to build the wheels, the problem will remain.
@magicse, were you able to build a working image? I've been running the image provided by linuxserver.io, which did not have the problem, but they are now ending life for armhf, so the last version available is 2023.4.4.
Interestingly, it appears 2023.2.x is the last official version to do the looping (Home Assistant Core finish process exit code 256 / Home Assistant Core finish process received signal 11). Versions 2023.3+ seem to completely crash the container.
I would like to reiterate for everyone here: there is clearly something wrong with the way the official container is built for our architecture, because the linuxserver.io container works just fine. I am currently back on their 2023.4.4 container.
For clarity, I am running a QNAP TS-431XeU, which has an Alpine AL314 Quad-core ARM Cortex-A15 CPU.
Do not use the native wheels from Home Assistant. They are not properly compiled for the Alpine AL314 quad-core due to the patchelf issue with auditwheel: https://github.com/NixOS/patchelf/issues/474. If you do the build yourself, everything will work: https://github.com/home-assistant/core/issues/86589#issuecomment-1438043758
Hi there,
I got the same problem with my QNAP TS-431P3.
Any solution suggested?
@magicse: could we do it on our own? Could you give instructions? I've never done this before...
I think the best variant for now is this:
1. https://github.com/home-assistant/core/issues/86589#issuecomment-1436089123
2. https://github.com/home-assistant/core/issues/86589#issuecomment-1436499486
Download the official minimal clean Alpine 3.16 or 3.17 Docker image (only 5 MB in size): https://hub.docker.com/_/alpine and install everything in it from scratch: Python 3.10.10, g++, gcc, and Home Assistant with the pip install homeassistant command.
apk add bash
bash
bash-5.1# apk add g++ gcc make
bash-5.1# apk add libcap libpcap-dev
bash-5.1# apk add python3
bash-5.1# python3 -m ensurepip --upgrade
bash-5.1# apk add python3-dev libffi-dev libftdi1-dev bzip2-dev openssl-dev cargo
bash-5.1# pip3 install aiohttp
bash-5.1# pip3 install ffmpeg
bash-5.1# pip3 install libpcap
bash-5.1# pip3 install tzdata
bash-5.1# pip3 install PyNaCl
bash-5.1# pip3 install homeassistant
bash-5.1# hass --config /config
Thx I will try it ;)
I don't know what the developers are doing with their container, but the new Home Assistant container image now throws error 139 even after I change the entrypoint to /bin/ash. Most likely, the binaries are built under QEMU without taking into account the subtleties of the Alpine AL314 quad-core armv7 hardware architecture. If you see "exit code = 135" or "exit code = 139", then your device's architecture is not supported.
RUN |6 BUILD_ARCH=armv7 QEMU_CPU= SSOCR_VERSION=2.22.1 LIBCEC_VERSION=6.0.2 QEMU_CPU= ??
ExitCode:139
Try instruction above
Will try coming days, sorry I am busy
Got my own container running now. It was a lot of work. Thanks to @magicse ;)
@frenck: is there any chance of getting this fixed by the team in the future? The topic has been open since January.
I was able to use @magicse's instructions to get an updated instance of HA running on my QNAP NAS as well. This proves there is an issue with the official HA builds that needs to be addressed, as we suspected. I hope the issue is addressed soon; this workaround makes updating far more complex.
Edit: I've been struggling to use this method with my production HA docker compose. The closest I get is the log looping a message about receiving traffic from a reverse proxy while my HTTP config is not set up for it. As far as I can tell, this means it must be using a default config rather than my production config, but I can't figure out why. Any tips or suggestions would be appreciated.
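For what it's worth, the "received traffic from a reverse proxy" message usually points at a missing trusted_proxies entry in the http: block. A minimal configuration.yaml sketch (the CIDR below is a placeholder; use the network your reverse proxy actually connects from):

```yaml
# configuration.yaml -- minimal sketch; 192.168.1.0/24 is a placeholder,
# replace it with the network your reverse proxy connects from.
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 192.168.1.0/24
```

This does not explain why the production config is not being picked up, but it removes the proxy warning once the right config file is loaded.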
@Wetzel402 QNAP updated its native reverse proxy, and there is now no need to install nginx in Docker. You can use the native one from QNAP, which automatically pulls in the certificate.
@frenck: is there any chance to get this fixed by the team in future? Topic open since January
That will not get addressed or receive any attention from our team. We suggest using our VM; we are not going to invest effort in the container installation. You will have to do this workaround forever, or someone needs to do it and share it.
Problem still with auditwheel + patchelf
Is the team deprecating the container installation?
My model of QNAP NAS does not support a VM install. Plus, since the VM still uses containers, I would be concerned the same problem might arise on my hardware. If I'm not mistaken, this issue also affects some single-board computers, such as early RPi models.
Edit: I do understand we are a very limited subset of users. With a little guidance I will gladly make a pull request to fix it.
Edit 2: @magicse, you seem to understand the issue. Can you make a pull request to fix it?
@magicse, I found that you apparently helped address the issue with patchelf, and the PR was merged March 9th. Is there a separate issue with auditwheel that you identified that also needs to be addressed in HA?
I see that at one time the HA wheels build compiled patchelf from source, but now it just installs it via apk add. I wonder if the latest build is not being used.
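Since the fix only exists in newer patchelf, a wheels build could guard on the installed version before repairing anything. A sketch (the 0.18.0 threshold is my assumption about the first release containing the NixOS/patchelf#474 fix; check the patchelf changelog before relying on it):

```python
import shutil
import subprocess

REQUIRED = (0, 18, 0)  # assumption: first patchelf release with the #474 fix

def parse_version(text: str):
    """Turn a dotted version string like '0.17.2' into a comparable tuple."""
    return tuple(int(part) for part in text.strip().split("."))

def patchelf_new_enough() -> bool:
    """Return True if a patchelf >= REQUIRED is on PATH."""
    if shutil.which("patchelf") is None:
        return False
    out = subprocess.run(
        ["patchelf", "--version"], capture_output=True, text=True, check=True
    ).stdout  # e.g. "patchelf 0.17.2"
    return parse_version(out.split()[1]) >= REQUIRED

# On the Alpine 3.16 apk package (0.12, per the earlier comment) this would
# return False, signalling that patchelf should be built from source instead.
```

Failing the build early here is cheaper than shipping wheels that segfault only on ARMv7 hardware.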
I found a working Docker image on Docker Hub, though I have not tested it yet:
https://hub.docker.com/r/jkilloy/homeassistantnew
Perhaps it helps somebody...
I was trying to see if this user has a GitHub repo, but I'm not finding one. It would be useful to see what changes they made. Also, since I can't find the source code, I'm a little leery of using the image.
Hello, I was testing different images and tags from the official armhf builds on a QNAP TS-231P3 (Alpine AL314, armv7) and I reached exit code 139. So definitely the official images are broken for this architecture. For now I'm surviving with a linuxserver image, but I think that for future releases I will need to use the snippet in: https://github.com/home-assistant/core/issues/86589#issuecomment-1673510507
Thank you for the thread and the help.
I think the best variant for now is this:
Download the official minimal clean Alpine 3.16 or 3.17 Docker image (only 5 MB in size): https://hub.docker.com/_/alpine and install everything in it from scratch: Python 3.10.10, g++, gcc, and Home Assistant with the pip install homeassistant command.
I also added pip3 install git+https://github.com/boto/botocore due to the error in https://github.com/home-assistant/core/issues/95192.
apk add bash
bash
bash-5.1# apk add g++ gcc make
bash-5.1# apk add libcap libpcap-dev
bash-5.1# apk add python3
bash-5.1# python3 -m ensurepip --upgrade
bash-5.1# apk add git
bash-5.1# apk add python3-dev libffi-dev libftdi1-dev bzip2-dev openssl-dev cargo jpeg-dev zlib-dev
bash-5.1# pip3 install aiohttp
bash-5.1# pip3 install ffmpeg
bash-5.1# pip3 install libpcap
bash-5.1# pip3 install tzdata
bash-5.1# pip3 install PyNaCl
bash-5.1# pip3 install Pillow
bash-5.1# pip3 install git+https://github.com/boto/botocore
bash-5.1# pip3 install homeassistant
bash-5.1# hass --config /config
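The manual steps above can be collected into a Dockerfile sketch so the image can be rebuilt reproducibly. This is untested: the package list is taken straight from the comment, and the /config volume is an assumption about how the container will be run:

```dockerfile
# Sketch only -- mirrors the manual steps above, not an official image.
FROM alpine:3.17

RUN apk add --no-cache bash g++ gcc make git \
        libcap libpcap-dev python3 python3-dev \
        libffi-dev libftdi1-dev bzip2-dev openssl-dev \
        cargo jpeg-dev zlib-dev \
    && python3 -m ensurepip --upgrade

RUN pip3 install aiohttp ffmpeg libpcap tzdata PyNaCl Pillow \
    && pip3 install git+https://github.com/boto/botocore \
    && pip3 install homeassistant

VOLUME /config
CMD ["hass", "--config", "/config"]
```

Building this directly on the NAS (or on any armv7 host) sidesteps the broken prebuilt wheels entirely, because pip compiles everything from source inside the image.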
Version 2022.7+ currently boot loops on QNAP NAS TS-231P
What version of Home Assistant Core has the issue? 2022.7+
What was the last working version of Home Assistant Core? 2022.6.7
What type of installation are you running? Home Assistant Container
Additional information: Running on a QNAP NAS (TS-231P).
Processor: Annapurna Labs Alpine AL314 Quad-core ARM Cortex-A15 CPU @ 1.70GHz