Aquive opened this issue 1 year ago
I had a quick look into this, and it is caused by the screen
binary on the Docker image. Since every service now runs within a screen session, the whole image has become unusable.
root@898c472e8bf8 / # file $(which screen)
/usr/bin/screen: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=daaf208bfacdbc92bf02a36339a191f9aeded05a, stripped
root@898c472e8bf8 / # screen --help
In this case the screen command just runs for a really long time, with the following error spammed over and over:
close(1073182801) = -1 EBADF (Bad file descriptor)
It does appear that after 1-2 minutes screen does return a response. Something is wrong with the screen
binary, and it seems to be a custom-made version from Hypernode themselves.
Hi @AngelsDustz and @Aquive, the screen version in the container is the default Buster screen: https://packages.debian.org/buster/screen; we don't do anything special with it. I've just pulled the latest version of the hypernode-docker and I cannot reproduce what you guys are seeing.
Also, the information you provided doesn't really show what exactly is not working. Once the container is started and you run screen -x,
don't you see a list of screens like this?
If not, what happens when you run bash -x /etc/my_init.d/60_restart_services.sh?
And once that has run, do you not see processes running if you run ps auxf?
You are running this on Windows, right? I don't really have experience with that, but perhaps you could give it a try on a Linux or Mac PC to rule out that it's something in your local setup.
@vdloo thanks for your reply. I am on Manjaro Linux. I think something became incompatible after a system upgrade. I am not sure which OS @AngelsDustz is on, but it seems strange that we encounter the same problem.
I cannot pinpoint what it is, and I don't know how to get to the source of the problem. It seems the services inside the container somehow do not start. The web server is not starting; I cannot open any webpages from the container on my host. Besides that, I cannot SSH into the container, probably because the SSH service also didn't start.
Since I cannot SSH into the container, I used docker exec to execute what you asked for.
This just hangs:
aq@aq-xps ~/Boxes> docker exec -it 907c2a103108 /bin/bash -c "screen -x"
This hangs after a couple of lines of output...
aq@aq-xps ~/Boxes> docker exec -it 907c2a103108 /bin/bash -c "/etc/my_init.d/60_restart_services.sh"
Killing old Hypernode services
Killing any old NGINX service
Killing any old Redis service
Killing any old PHP-FPM service
Killing any old MySQL service
Killing any old Mailhog service
Killing any old nginx-config-reloader service
Killing any old Varnish service
Killing any old ElasticSearch service
Giving any old services 5 seconds to stop..
Output of ps auxf. I am not sure how to read this...
aq@aq-xps ~/Boxes> docker exec -it 907c2a103108 /bin/bash -c "ps auxf"
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 218 0.0 0.0 10568 3080 pts/1 Rs+ 08:37 0:00 ps auxf
root 163 99.7 0.0 5436 1028 pts/0 Rs+ 08:28 8:48 screen -x
root 1 0.0 0.0 16128 9792 ? Ss 08:25 0:00 /usr/bin/python3 -u /sbin/my_init
root 127 0.0 0.0 43496 6948 ? Ss 08:25 0:00 /usr/lib/postfix/sbin/master
postfix 132 0.0 0.0 43836 7116 ? S 08:25 0:00 \_ pickup -l -t unix -u -c
postfix 134 0.0 0.0 43888 7148 ? S 08:25 0:00 \_ qmgr -l -t unix -u
root 128 0.0 0.0 6664 3188 ? S 08:25 0:00 /bin/bash /etc/my_init.d/60_restart_services.sh
root 162 99.8 0.0 5436 1028 ? R 08:26 11:16 \_ screen -wipe
I just switched back from Kernel 5.10 to 5.4 as a test. And now it is working again as expected. Weird stuff...
Hey @vdloo, screen
just hangs for me; it won't respond. Running the restart_services script hangs after "Giving any old services 5 seconds to stop..".
I am running Arch Linux:
Linux arch 6.3.1-arch1-1 #1 SMP PREEMPT_DYNAMIC Mon, 01 May 2023 17:42:39 +0000 x86_64 GNU/Linux
Docker version 23.0.5, build bc4487a59e
ps auxf output:
root@e827aef546f2 / # ps auxf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 171 0.0 0.0 6928 3584 pts/0 Ss 10:04 0:00 bash
root 180 0.0 0.0 10568 3072 pts/0 R+ 10:04 0:00 \_ ps auxf
root 1 0.2 0.0 16128 9384 ? Ss 10:04 0:00 /usr/bin/pyth
root 135 0.0 0.0 43492 6912 ? Ss 10:04 0:00 /usr/lib/post
postfix 140 0.0 0.0 43836 7168 ? S 10:04 0:00 \_ pickup -l
postfix 141 0.0 0.0 43888 7296 ? S 10:04 0:00 \_ qmgr -l -
root 136 0.0 0.0 6664 3200 ? S 10:04 0:00 /bin/bash /et
root 170 97.9 0.0 5436 1536 ? R 10:04 0:20 \_ screen -w
@AngelsDustz which kernel version are you on? Did you see my comment? https://github.com/ByteInternet/hypernode-docker/issues/84#issuecomment-1554288069
Yes, I am on 6.3.1
Yeah, I can reproduce this on Arch Linux on a 6.x kernel indeed.
It's doing this rapidly and forever:
close(157097801) = -1 EBADF (Bad file descriptor)
close(157097800) = -1 EBADF (Bad file descriptor)
close(157097799) = -1 EBADF (Bad file descriptor)
close(157097798) = -1 EBADF (Bad file descriptor)
close(157097797) = -1 EBADF (Bad file descriptor)
close(157097796) = -1 EBADF (Bad file descriptor)
close(157097795) = -1 EBADF (Bad file descriptor)
close(157097794) = -1 EBADF (Bad file descriptor)
close(157097793) = -1 EBADF (Bad file descriptor)
close(157097792) = -1 EBADF (Bad file descriptor)
close(157097791) = -1 EBADF (Bad file descriptor)
close(157097790) = -1 EBADF (Bad file descriptor)
close(157097789^C) = -1 EBADF (Bad file descriptor)
strace: Process 170126 detached
[root@desktop vdloo]# strace -p 170126 -s 9999 -f
and can be reproduced by just running screen in the container:
root@8d262a1ee361 / # strace screen -S test
execve("/usr/bin/screen", ["screen", "-S", "test"], 0x7ffe7be5a580 /* 15 vars */) = 0
brk(NULL) = 0x5649a37cd000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=45519, ...}) = 0
mmap(NULL, 45519, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f8af60f5000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libtinfo.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\351\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=183528, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f8af60f3000
mmap(NULL, 186752, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f8af60c5000
mmap(0x7f8af60d3000, 57344, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xe000) = 0x7f8af60d3000
mmap(0x7f8af60e1000, 53248, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1c000) = 0x7f8af60e1000
mmap(0x7f8af60ee000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x28000) = 0x7f8af60ee000
close(3) = 0
openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/libutempter.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\t\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=10160, ...}) = 0
mmap(NULL, 2105376, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f8af5ec2000
mprotect(0x7f8af5ec3000, 2097152, PROT_NONE) = 0
mmap(0x7f8af60c3000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1000) = 0x7f8af60c3000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libcrypt.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\21\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=43328, ...}) = 0
mmap(NULL, 234016, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f8af5e88000
mprotect(0x7f8af5e89000, 36864, PROT_NONE) = 0
mmap(0x7f8af5e89000, 24576, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1000) = 0x7f8af5e89000
mmap(0x7f8af5e8f000, 8192, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7f8af5e8f000
mmap(0x7f8af5e92000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x9000) = 0x7f8af5e92000
mmap(0x7f8af5e94000, 184864, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f8af5e94000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libpam.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\3404\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=64176, ...}) = 0
mmap(NULL, 66168, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f8af5e77000
mmap(0x7f8af5e7a000, 32768, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f8af5e7a000
mmap(0x7f8af5e82000, 16384, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb000) = 0x7f8af5e82000
mmap(0x7f8af5e86000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xe000) = 0x7f8af5e86000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260A\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1820400, ...}) = 0
mmap(NULL, 1832960, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f8af5cb7000
mprotect(0x7f8af5cd9000, 1654784, PROT_NONE) = 0
mmap(0x7f8af5cd9000, 1339392, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x22000) = 0x7f8af5cd9000
mmap(0x7f8af5e20000, 311296, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x169000) = 0x7f8af5e20000
mmap(0x7f8af5e6d000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b5000) = 0x7f8af5e6d000
mmap(0x7f8af5e73000, 14336, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f8af5e73000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libaudit.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\3205\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=128944, ...}) = 0
mmap(NULL, 172200, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f8af5c8c000
mprotect(0x7f8af5c8f000, 114688, PROT_NONE) = 0
mmap(0x7f8af5c8f000, 28672, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f8af5c8f000
mmap(0x7f8af5c96000, 81920, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xa000) = 0x7f8af5c96000
mmap(0x7f8af5cab000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1e000) = 0x7f8af5cab000
mmap(0x7f8af5cad000, 37032, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f8af5cad000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0000\21\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=14592, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f8af5c8a000
mmap(NULL, 16656, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f8af5c85000
mmap(0x7f8af5c86000, 4096, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1000) = 0x7f8af5c86000
mmap(0x7f8af5c87000, 4096, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f8af5c87000
mmap(0x7f8af5c88000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f8af5c88000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libcap-ng.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\360\"\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=26976, ...}) = 0
mmap(NULL, 29056, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f8af5c7d000
mmap(0x7f8af5c7f000, 12288, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f8af5c7f000
mmap(0x7f8af5c82000, 4096, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0x7f8af5c82000
mmap(0x7f8af5c83000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0x7f8af5c83000
close(3) = 0
mmap(NULL, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f8af5c7a000
arch_prctl(ARCH_SET_FS, 0x7f8af5c7a740) = 0
mprotect(0x7f8af5e6d000, 16384, PROT_READ) = 0
mprotect(0x7f8af5c83000, 4096, PROT_READ) = 0
mprotect(0x7f8af5c88000, 4096, PROT_READ) = 0
mprotect(0x7f8af5cab000, 4096, PROT_READ) = 0
mprotect(0x7f8af5e86000, 4096, PROT_READ) = 0
mprotect(0x7f8af5e92000, 4096, PROT_READ) = 0
mprotect(0x7f8af60c3000, 4096, PROT_READ) = 0
mprotect(0x7f8af60ee000, 16384, PROT_READ) = 0
mprotect(0x5649a2ae3000, 4096, PROT_READ) = 0
mprotect(0x7f8af6128000, 4096, PROT_READ) = 0
munmap(0x7f8af60f5000, 45519) = 0
prlimit64(0, RLIMIT_NOFILE, NULL, {rlim_cur=1073741816, rlim_max=1073741816}) = 0
close(1073741815) = -1 EBADF (Bad file descriptor)
close(1073741814) = -1 EBADF (Bad file descriptor)
close(1073741813) = -1 EBADF (Bad file descriptor)
close(1073741812) = -1 EBADF (Bad file descriptor)
close(1073741811) = -1 EBADF (Bad file descriptor)
close(1073741810) = -1 EBADF (Bad file descriptor)
close(1073741809) = -1 EBADF (Bad file descriptor)
close(1073741808) = -1 EBADF (Bad file descriptor)
close(1073741807) = -1 EBADF (Bad file descriptor)
close(1073741806) = -1 EBADF (Bad file descriptor)
close(1073741805) = -1 EBADF (Bad file descriptor)
close(1073741804) = -1 EBADF (Bad file descriptor)
close(1073741803) = -1 EBADF (Bad file descriptor)
... forever
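The prlimit64 line in the trace above looks like the smoking gun: the container reports a soft RLIMIT_NOFILE of 1073741816 (about 2^30), and this screen build then issues one close() per possible file descriptor below that limit, hence the endless EBADF spam. A rough back-of-the-envelope sketch of why it "hangs" (the syscall rate here is an assumption for illustration, not a measurement):

```shell
# rlim_cur reported by prlimit64 in the strace above
limit=1073741816
# hypothetical throughput of ~1,000,000 close() syscalls per second
rate=1000000
# minutes the close() loop would grind away at that rate
echo $(( limit / rate / 60 ))
```

That order of magnitude would be consistent with the earlier observation that screen sometimes does come back after a minute or two on a fast machine, and appears to hang forever when traced (strace slows every syscall down considerably).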
I'm not able to reproduce this issue on 6.3.2, but I'm using Podman instead of Docker:
root@d1ebbfa9e498 / # ps faux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 489 0.0 0.0 6928 3520 pts/7 Ss 12:40 0:00 bash
root 566 0.0 0.0 10736 3040 pts/7 R+ 12:40 0:00 \_ ps faux
root 1 0.0 0.0 16128 9848 ? Ss 12:40 0:00 /usr/bin/python3 -u /sbin/my_init
root 130 0.0 0.0 43496 6880 ? Ss 12:40 0:00 /usr/lib/postfix/sbin/master
postfix 135 0.0 0.0 43836 7040 ? S 12:40 0:00 \_ pickup -l -t unix -u -c
postfix 136 0.0 0.0 43888 7200 ? S 12:40 0:00 \_ qmgr -l -t unix -u
root 168 0.0 0.0 8712 2088 ? Ss 12:40 0:00 /usr/bin/SCREEN -S hypernode_service_nginx -d -m /usr/sbin/nginx -g daemon off; master_process on;
root 170 0.1 0.1 64544 34484 pts/0 Ss+ 12:40 0:00 \_ nginx: master process /usr/sbin/nginx -g daemon off; master_process on;
app 174 0.0 0.0 64828 30996 pts/0 S+ 12:40 0:00 \_ nginx: worker process
app 175 0.0 0.0 64828 30036 pts/0 S+ 12:40 0:00 \_ nginx: worker process
app 176 0.0 0.0 64828 30036 pts/0 S+ 12:40 0:00 \_ nginx: worker process
app 177 0.0 0.0 64828 30036 pts/0 S+ 12:40 0:00 \_ nginx: worker process
root 171 0.0 0.0 8712 2088 ? Ss 12:40 0:00 /usr/bin/SCREEN -S hypernode_service_redis -d -m /usr/bin/redis-server /etc/redis/redis.conf --daemonize no
root 173 0.2 0.0 60164 15284 pts/1 Ssl+ 12:40 0:00 \_ /usr/bin/redis-server 127.0.0.1:6379
root 178 0.0 0.0 8712 2088 ? Ss 12:40 0:00 /usr/bin/SCREEN -S hypernode_service_php -d -m /usr/sbin/php-fpm8.2 --nodaemonize --fpm-config /etc/php/8.2/fpm/php-fpm.conf
root 180 0.1 0.1 380832 43944 pts/2 Ss+ 12:40 0:00 \_ php-fpm: master process (/etc/php/8.2/fpm/php-fpm.conf)
app 190 0.0 0.0 381180 26608 pts/2 S+ 12:40 0:00 \_ php-fpm: pool www
app 191 0.0 0.0 381116 17808 pts/2 S+ 12:40 0:00 \_ php-fpm: pool www
root 187 0.0 0.0 8712 2088 ? Ss 12:40 0:00 /usr/bin/SCREEN -S hypernode_service_mysql -d -m bash -c until /usr/sbin/mysqld; do sleep 1; done
root 189 0.1 0.0 6580 2880 pts/3 Ss+ 12:40 0:00 \_ bash -c until /usr/sbin/mysqld; do sleep 1; done
mysql 195 2.6 0.5 2178860 185264 pts/3 Sl+ 12:40 0:00 \_ /usr/sbin/mysqld
root 192 0.0 0.0 8712 2088 ? Ss 12:40 0:00 /usr/bin/SCREEN -S hypernode_service_mailhog -d -m /usr/bin/mailhog
root 194 0.1 0.0 12256 7200 pts/4 Ssl+ 12:40 0:00 \_ /usr/bin/mailhog
root 231 0.0 0.0 8712 2088 ? Ss 12:40 0:00 /usr/bin/SCREEN -S hypernode_service_varnish -d -m /usr/sbin/varnishd -j unix,user=varnish,ccgroup=varnish -F -a :6081 -p vcc_allow_inline_c=on -p thread_pool_stack=
varnish 233 0.3 0.0 18524 11268 pts/5 SLs+ 12:40 0:00 \_ /usr/sbin/varnishd -j unix,user=varnish,ccgroup=varnish -F -a :6081 -p vcc_allow_inline_c=on -p thread_pool_stack=256k -p thread_pool_add_delay=2 -p thread_pools
vcache 300 0.1 0.2 281100 67884 pts/5 SLl+ 12:40 0:00 \_ /usr/sbin/varnishd -j unix,user=varnish,ccgroup=varnish -F -a :6081 -p vcc_allow_inline_c=on -p thread_pool_stack=256k -p thread_pool_add_delay=2 -p thread_p
root 234 0.0 0.0 8712 2088 ? Ss 12:40 0:00 /usr/bin/SCREEN -S hypernode_service_elasticsearch -d -m bash -c mkdir -p /var/run/elasticsearch; chown -R elasticsearch /var/run/elasticsearch; su -s /bin/bash -c '
root 236 0.0 0.0 6712 3200 pts/6 Ss+ 12:40 0:00 \_ bash -c mkdir -p /var/run/elasticsearch; chown -R elasticsearch /var/run/elasticsearch; su -s /bin/bash -c 'source /etc/default/elasticsearch; export ES_HOME=/us
root 240 0.0 0.0 12076 4320 pts/6 S+ 12:40 0:00 \_ su -s /bin/bash -c source /etc/default/elasticsearch; export ES_HOME=/usr/share/elasticsearch; export ES_PATH_CONF=/etc/elasticsearch; export PID_DIR=/var/ru
elastic+ 241 0.0 0.0 6580 2880 ? Ss 12:40 0:00 \_ bash -c source /etc/default/elasticsearch; export ES_HOME=/usr/share/elasticsearch; export ES_PATH_CONF=/etc/elasticsearch; export PID_DIR=/var/run/elast
elastic+ 242 52.5 6.0 10079416 1975116 ? Sl 12:40 0:11 \_ /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.aw
root 545 0.0 0.0 2312 1280 ? S 12:40 0:00 /usr/bin/runsvdir -P /etc/service
root 546 0.0 0.0 2160 1120 ? Ss 12:40 0:00 \_ runsv cron
root 549 0.0 0.0 8440 2720 ? S 12:40 0:00 | \_ /usr/sbin/cron -f
root 547 0.0 0.0 2160 1120 ? Ss 12:40 0:00 \_ runsv sshd
root 550 0.0 0.0 13820 7200 ? S 12:40 0:00 | \_ /usr/sbin/sshd -D
root 548 0.0 0.0 2160 1120 ? Ss 12:40 0:00 \_ runsv irqbalance
root 551 0.0 0.0 82276 3680 ? Sl 12:40 0:00 \_ /usr/sbin/irqbalance --foreground
Screen working just fine:
root@d1ebbfa9e498 / # screen -ls
There are screens on:
652.test (05/19/2023 12:41:12 PM) (Detached)
192.hypernode_service_mailhog (05/19/2023 12:40:24 PM) (Detached)
231.hypernode_service_varnish (05/19/2023 12:40:24 PM) (Detached)
234.hypernode_service_elasticsearch (05/19/2023 12:40:24 PM) (Detached)
187.hypernode_service_mysql (05/19/2023 12:40:24 PM) (Detached)
171.hypernode_service_redis (05/19/2023 12:40:24 PM) (Detached)
168.hypernode_service_nginx (05/19/2023 12:40:24 PM) (Detached)
178.hypernode_service_php (05/19/2023 12:40:24 PM) (Detached)
8 Sockets in /run/screen/S-root.
I'm running Podman in rootless mode. My podman info:
host:
  arch: amd64
  buildahVersion: 1.30.0
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.1.7-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: f633919178f6c8ee4fb41b848a056ec33f8d707d'
  cpuUtilization:
    idlePercent: 96.46
    systemPercent: 0.73
    userPercent: 2.82
  cpus: 20
  databaseBackend: boltdb
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  hostname: alpha
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.3.2-arch1-1
  linkmode: dynamic
  logDriver: journald
  memFree: 9783197696
  memTotal: 33420230656
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.8.4-1
    path: /usr/bin/crun
    version: |-
      crun version 1.8.4
      commit: 5a8fa99a5e41facba2eda4af12fa26313918805b
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.2.0-1
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 0
  swapTotal: 0
  uptime: 5h 25m 5.00s (Approximately 0.21 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /home/alex/.config/containers/storage.conf
  containerStore:
    number: 37
    paused: 0
    running: 1
    stopped: 36
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/alex/.local/share/containers/storage
  graphRootAllocated: 490563387392
  graphRootUsed: 341384134656
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 350
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/alex/.local/share/containers/storage/volumes
version:
  APIVersion: 4.5.0
  Built: 1684095102
  BuiltTime: Sun May 14 22:11:42 2023
  GitCommit: 75e3c12579d391b81d871fd1cded6cf0d043550a-dirty
  GoVersion: go1.20.4
  Os: linux
  OsArch: linux/amd64
  Version: 4.5.0
Perhaps Docker has been updated? Or perhaps your kernel upgrade changed something with regard to your Docker installation?
@AlexanderGrooff @vdloo Manjaro (which I am using) is Arch based. I think it has to do with versions or something, since the problems started after I did a system upgrade. Along with the kernel downgrade, I also downgraded Docker. Actually, I did the Docker downgrade first, but it had no impact, so after that I downgraded the kernel too. So it might be one of the two versions or a combination of both.
Current version:
aq@aq-xps ~> docker -v
Docker version 20.10.23, build 715524332f
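If the enormous soft RLIMIT_NOFILE seen in the strace earlier in the thread is indeed the trigger, capping the fd limit for the container might work around it, e.g. by passing --ulimit nofile=1024:1048576 to docker run (an untested sketch; the exact values are arbitrary). The effect of a lowered soft limit can be simulated in a plain shell:

```shell
# Simulate what a lower --ulimit nofile would look like inside the
# container: start a subshell with the soft fd limit capped at 1024
# and print it back. screen's close() loop would then only have to
# walk 1024 descriptors instead of ~2^30.
bash -c 'ulimit -Sn 1024; ulimit -Sn'
```

With a limit like that, the fd-closing loop finishes effectively instantly, which is consistent with the container behaving normally on older kernels and Docker versions where the default limit was much lower.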
I cannot connect to several environments I created a couple of days ago. I think upgrading my system (Manjaro Linux) might be the cause. How can I solve or debug this? Help is very much appreciated.
I created a fresh one to test, same problem.
I think the box is in a loop; the services don't seem to start, at least that's what I think based on the docker logs. When I try to connect, the connection is just refused (SSH service not started?).
I can ping
Logs
Docker version
Extra info