Closed jacotec closed 5 days ago
Seeing also these in the logs:
watchdog-mailcow-1 | connect to address 172.22.1.7 and port 9001: Connection refused
watchdog-mailcow-1 | connect to address 172.22.1.7 and port 9002: Connection refused
(the two lines above repeat continuously)
Looks like it's all down to these ports 9001 and 9002 ... what are these ports, and why are the connections refused?
Hi Marco,
having exactly the same issues after running the update today.
Regards Chris
Did you solve this? Or also still down?
I ran into the same issue. For me it was solved by removing the search option from the file /etc/resolv.conf on the host and restarting mailcow's PHP-FPM container. The same approach already helped with a similar situation in January (#5646).
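For reference, that workaround can be sketched as a pair of shell commands. The sketch below operates on a copy of the file for illustration; on a real host the target would be /etc/resolv.conf itself, and the compose service name may differ per setup:

```shell
# Demo on a copy; on the host the target is /etc/resolv.conf.
printf 'nameserver 10.0.0.1\nsearch example.internal\n' > /tmp/resolv.conf.demo

# Drop the "search" (and, if present, "domain") lines.
sed -i '/^search /d; /^domain /d' /tmp/resolv.conf.demo

# Then restart the PHP-FPM container so it picks up the change, e.g.:
# docker compose restart php-fpm-mailcow
```

Note that on hosts where a resolver service manages the file, the edit may be overwritten again, as discussed further below in the thread.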
@sbonfert There is a "search" entry, yes ... but when I remove it and save /etc/resolv.conf the entry is back in there in a second. No idea where it comes from, it's not in the netplan config
Did you solve this? Or also still down?
Still down (for 2 hours now), but I will try the option sbonfert suggested now.
Hi again,
removing the search option didn't fix the issue, still getting
watchdog-mailcow-1 | connect to address 172.22.1.12 and port 9001: Connection refused
watchdog-mailcow-1 | connect to address 172.22.1.12 and port 9002: Connection refused
watchdog-mailcow-1 | connect to address 172.22.1.12 and port 9001: Connection refused
watchdog-mailcow-1 | connect to address 172.22.1.12 and port 9002: Connection refused
and
rspamd-mailcow-1 | Waiting for PHP on port 9001...
in the logs after the reboot of the server.
Regards Chris
So we need to hope that someone from the mailcow devs sees this and helps us. I should have taken a snapshot before the upgrade :-(
Maybe try a Docker update first. All versions I've tested use at least Docker 25.0.4; maybe something is wrong inside the Docker daemon.
I'm using Docker version 27.0.2, build 912c1dd
Can someone give me the PHP logs?
php-fpm-mailcow-1 | Waiting for SQL...
php-fpm-mailcow-1 | Uptime: 1 Threads: 2 Questions: 1 Slow queries: 0 Opens: 17 Open tables: 10 Queries per second avg: 1.000
Do the MySQL logs say something?
Similar here:
php-fpm-mailcow-1 | Waiting for SQL...
php-fpm-mailcow-1 | Waiting for SQL...
php-fpm-mailcow-1 | Waiting for SQL...
php-fpm-mailcow-1 | Waiting for SQL...
php-fpm-mailcow-1 | Waiting for SQL...
php-fpm-mailcow-1 | Waiting for SQL...
php-fpm-mailcow-1 | Uptime: 3 Threads: 2 Questions: 4 Slow queries: 0 Opens: 17 Open tables: 10 Queries per second avg: 1.333
@DerLinkman Docker updated to 27.0.2, no change
Do the MySQL logs say something?
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] Starting MariaDB 10.5.25-MariaDB-ubu2004 source revision 29c185bd771ac441121468b3850d6dc8d13b8a1f as process 1
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: Uses event mutexes
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: Number of pools: 1
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] mysqld: O_TMPFILE is not supported on /tmp (disabling future attempts)
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: Using Linux native AIO
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: Initializing buffer pool, total size = 25165824, chunk size = 25165824
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: Completed initialization of buffer pool
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: 128 rollback segments are active.
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: Creating shared tablespace for temporary tables
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: 10.5.25 started; log sequence number 1714851094647; transaction id 897410867
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] Server socket created on IP: '::'.
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Warning] 'proxies_priv' entry '@% root@2ced8fca0e4c' ignored in --skip-name-resolve mode.
mysql-mailcow-1 | 2024-06-27 16:41:49 1 [Note] Event Scheduler: scheduler thread started with id 1
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] InnoDB: Buffer pool(s) load completed at 240627 16:41:49
mysql-mailcow-1 | 2024-06-27 16:41:49 0 [Note] mysqld: ready for connections.
mysql-mailcow-1 | Version: '10.5.25-MariaDB-ubu2004' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
mysql-mailcow-1 | 2024-06-27 16:42:01 0 [Note] mysqld (initiated by: unknown): Normal shutdown
mysql-mailcow-1 | 2024-06-27 16:42:01 0 [Note] Event Scheduler: Killing the scheduler thread, thread id 1
mysql-mailcow-1 | 2024-06-27 16:42:01 0 [Note] Event Scheduler: Waiting for the scheduler thread to reply
mysql-mailcow-1 | 2024-06-27 16:42:01 0 [Note] Event Scheduler: Stopped
mysql-mailcow-1 | 2024-06-27 16:42:01 0 [Note] InnoDB: FTS optimize thread exiting.
mysql-mailcow-1 | 2024-06-27 16:42:01 0 [Note] InnoDB: Starting shutdown...
mysql-mailcow-1 | 2024-06-27 16:42:01 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
mysql-mailcow-1 | 2024-06-27 16:42:01 0 [Note] InnoDB: Restricted to 378 pages due to innodb_buf_pool_dump_pct=25
mysql-mailcow-1 | 2024-06-27 16:42:01 0 [Note] InnoDB: Buffer pool(s) dump completed at 240627 16:42:01
mysql-mailcow-1 | 2024-06-27 16:42:02 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
mysql-mailcow-1 | 2024-06-27 16:42:02 0 [Note] InnoDB: Shutdown completed; log sequence number 1714851095248; transaction id 897410895
mysql-mailcow-1 | 2024-06-27 16:42:03 0 [Note] Event Scheduler: Purging the queue. 3 events
mysql-mailcow-1 | 2024-06-27 16:42:03 0 [Note] mysqld: Shutdown complete
mysql-mailcow-1 |
mysql-mailcow-1 | 2024-06-27 16:43:42+02:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.5.25+maria~ubu2004 started.
mysql-mailcow-1 | 2024-06-27 16:43:49+02:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mysql-mailcow-1 | 2024-06-27 16:43:49+02:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.5.25+maria~ubu2004 started.
mysql-mailcow-1 | 2024-06-27 16:43:49+02:00 [Note] [Entrypoint]: MariaDB heathcheck configation file missing, assuming desirable
mysql-mailcow-1 | 2024-06-27 16:43:49+02:00 [Note] [Entrypoint]: MariaDB upgrade (mysql_upgrade or creating healthcheck users) required, but skipped due to $MARIADB_AUTO_UPGRADE setting
mysql-mailcow-1 | 2024-06-27 16:43:49 0 [Note] Starting MariaDB 10.5.25-MariaDB-ubu2004 source revision 29c185bd771ac441121468b3850d6dc8d13b8a1f as process 1
mysql-mailcow-1 | 2024-06-27 16:43:49 0 [Note] InnoDB: Uses event mutexes
mysql-mailcow-1 | 2024-06-27 16:43:49 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mysql-mailcow-1 | 2024-06-27 16:43:49 0 [Note] InnoDB: Number of pools: 1
mysql-mailcow-1 | 2024-06-27 16:43:49 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mysql-mailcow-1 | 2024-06-27 16:43:49 0 [Note] mysqld: O_TMPFILE is not supported on /tmp (disabling future attempts)
mysql-mailcow-1 | 2024-06-27 16:43:49 0 [Note] InnoDB: Using Linux native AIO
mysql-mailcow-1 | 2024-06-27 16:43:49 0 [Note] InnoDB: Initializing buffer pool, total size = 25165824, chunk size = 25165824
mysql-mailcow-1 | 2024-06-27 16:43:49 0 [Note] InnoDB: Completed initialization of buffer pool
mysql-mailcow-1 | 2024-06-27 16:43:50 0 [Note] InnoDB: 128 rollback segments are active.
mysql-mailcow-1 | 2024-06-27 16:43:51 0 [Note] InnoDB: Creating shared tablespace for temporary tables
mysql-mailcow-1 | 2024-06-27 16:43:51 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
mysql-mailcow-1 | 2024-06-27 16:43:51 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
mysql-mailcow-1 | 2024-06-27 16:43:51 0 [Note] InnoDB: 10.5.25 started; log sequence number 1714851095248; transaction id 897410896
mysql-mailcow-1 | 2024-06-27 16:43:51 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mysql-mailcow-1 | 2024-06-27 16:43:51 0 [Note] Server socket created on IP: '::'.
mysql-mailcow-1 | 2024-06-27 16:43:51 0 [Warning] 'proxies_priv' entry '@% root@2ced8fca0e4c' ignored in --skip-name-resolve mode.
mysql-mailcow-1 | 2024-06-27 16:43:51 1 [Note] Event Scheduler: scheduler thread started with id 1
mysql-mailcow-1 | 2024-06-27 16:43:51 0 [Note] mysqld: ready for connections.
mysql-mailcow-1 | Version: '10.5.25-MariaDB-ubu2004' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
mysql-mailcow-1 | 2024-06-27 16:43:56 0 [Note] InnoDB: Buffer pool(s) load completed at 240627 16:43:56
mysql-mailcow-1 | 2024-06-27 16:44:13 28 [Note] Detected table cache mutex contention at instance 1: 31% waits. Additional table cache instance activated. Number of instances after activation: 2.
Do the MySQL logs say something?
Nope.
mysql-mailcow-1 | 2024-06-27 16:24:38+02:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.5.25+maria~ubu2004 started.
mysql-mailcow-1 | 2024-06-27 16:24:38+02:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mysql-mailcow-1 | 2024-06-27 16:24:38+02:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.5.25+maria~ubu2004 started.
mysql-mailcow-1 | 2024-06-27 16:24:39+02:00 [Note] [Entrypoint]: MariaDB upgrade not required
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] Starting MariaDB 10.5.25-MariaDB-ubu2004 source revision 29c185bd771ac441121468b3850d6dc8d13b8a1f as process 1
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: Uses event mutexes
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: Number of pools: 1
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] mysqld: O_TMPFILE is not supported on /tmp (disabling future attempts)
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: Using Linux native AIO
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: Initializing buffer pool, total size = 25165824, chunk size = 25165824
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: Completed initialization of buffer pool
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: 128 rollback segments are active.
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: Creating shared tablespace for temporary tables
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: 10.5.25 started; log sequence number 170343725; transaction id 803617
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] InnoDB: Buffer pool(s) load completed at 240627 16:24:39
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] Server socket created on IP: '::'.
mysql-mailcow-1 | 2024-06-27 16:24:39 1 [Note] Event Scheduler: scheduler thread started with id 1
mysql-mailcow-1 | 2024-06-27 16:24:39 0 [Note] mysqld: ready for connections.
mysql-mailcow-1 | Version: '10.5.25-MariaDB-ubu2004' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
What about rolling back the PHP container? That already helped a few months ago ...?
That will most likely work, but I need more info, otherwise I cannot debug it. I cannot reproduce it.
My guess is that something is off with the mysql client inside the PHP container, but I need to find out what exactly.
@DerLinkman Sure, fully understandable. How can we help you more? But at some point we need mail services back ;-)
Got something guys :)
I ran into the same issue. For me it was solved by removing the search option from the file /etc/resolv.conf on the host and restarting mailcow's PHP-FPM container. The same approach already helped with a similar situation in January (#5646).
This really helps. I had to comment it out in my netplan config. Honestly, I'm not too keen on keeping a weird workaround like this applied, but maybe it can help @DerLinkman find out what's happening here (again?).
By the way, I'm fine with temporarily reverting my "fix" and helping debug, if needed 😄
That's my netplan config:
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens160:
      dhcp6: no
      dhcp4: no
      addresses:
        - 10.0.1.3/12
        - 2003:a:650:4e00::a00:103/64
      gateway4: 10.0.0.1
      gateway6: 2003:a:650:4e00::1
      nameservers:
        addresses:
          - 10.0.0.1
          - 10.0.0.2
          - 2003:a:650:4e00::1
  version: 2
There is no freakin' search domain configured at all, but something writes it back into my /etc/resolv.conf every minute. I don't have a search domain configured anywhere. How and why does it get there?
And how can I delete it? Will try this ...
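On Ubuntu, /etc/resolv.conf is usually not a static file but is regenerated by a resolver service (typically systemd-resolved fed by netplan), which would explain the entry reappearing. A quick way to check who owns the file (a diagnostic sketch; resolvectl only exists on systemd-resolved hosts):

```shell
# If /etc/resolv.conf is a symlink into /run/systemd/resolve/,
# it is managed by systemd-resolved and hand edits get overwritten.
ls -l /etc/resolv.conf

# Show the DNS servers and search domains the service is configured
# with, if the tool is available on this host.
command -v resolvectl >/dev/null && resolvectl status | head -n 20 || true
```

If systemd-resolved manages the file, the search domain has to be removed at the source (netplan or DHCP), not in /etc/resolv.conf itself.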
Let me try pushing a newer PHP container with Alpine 3.20.1, which patches curl.
I published the PHP container over 20 days ago, before Alpine 3.20.1 was released, so maybe that already solves it...
@jacotec Maybe you have multiple netplan configs? If not, you can try explicitly declaring the search domain as an empty array, as in this article: https://www.howtogeek.com/devops/how-to-set-dns-search-order-in-ubuntu-18-04-using-netplan/
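For illustration, the empty-search-list variant from that article would look roughly like this when adapted to the ens160 config quoted above (an untested sketch, to be merged into the existing netplan file and applied with sudo netplan apply):

```yaml
network:
  version: 2
  ethernets:
    ens160:
      nameservers:
        search: []        # explicitly declare that no search domains are wanted
        addresses:
          - 10.0.0.1
          - 10.0.0.2
          - 2003:a:650:4e00::1
```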
Removing domain and search from my /etc/resolv.conf fixed it for me as well (until I restart and it gets reset). OS: Debian
Sounds exactly like last time... why did they not patch this bug already in 3.20? Why only in 3.20.1? What a mess... and, sorry, like last time it does not appear everywhere.
P.S.: Of course it takes ages to build this (hopefully) patched image... sure, why not!?
Ah, the image is online! Simply re-run docker compose pull and it should pull it. Then docker compose up -d and it hopefully works, even with the search parameters in place...
@DerLinkman Bad news: No change with the new image :-(
And it's back...
Same here, it does not work. Removing the parameters gets it up and running again.
Yep ... everything is still waiting for ports 9001 and 9002.
Same here, it does not work. Removing the parameters gets it up and running again.
... which still does not work here. Even with the additional search: [] parameter in netplan, the search domain is still there and gets recreated every minute.
Does a downgrade of PHP work?
I cannot reproduce the problem. Is the PHP-FPM container just waiting for MySQL, or is there something more in the container logs?
Can someone execute this command from inside the php-fpm Container and see if there is a valid response?
curl --silent --insecure https://dockerapi/containers/json
d41cdc7e9916:/# curl --silent --insecure https://dockerapi/containers/json
d41cdc7e9916:/# curl --insecure https://dockerapi/containers/json
curl: (92) HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR (err 1)
@YeapGuy thanks, please also add the -v flag and post the output:
curl --insecure -v https://dockerapi/containers/json
d41cdc7e9916:/# curl --insecure -v https://dockerapi/containers/json
* Host dockerapi:443 was resolved.
* IPv6: (none)
* IPv4: 95.105.221.201
* Trying 95.105.221.201:443...
* Connected to dockerapi (95.105.221.201) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / x25519 / RSASSA-PSS
* ALPN: server accepted h2
* Server certificate:
* subject: CN=*
* start date: Mar 9 11:15:53 2024 GMT
* expire date: Jul 11 11:15:53 3023 GMT
* issuer: CN=*
* SSL certificate verify result: self-signed certificate (18), continuing anyway.
* Certificate level 0: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://dockerapi/containers/json
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: dockerapi]
* [HTTP/2] [1] [:path: /containers/json]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
> GET /containers/json HTTP/2
> Host: dockerapi
> User-Agent: curl/8.7.1
> Accept: */*
>
* Request completely sent off
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR (err 1)
* Connection #0 to host dockerapi left intact
curl: (92) HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR (err 1)
Hmm, somehow curl resolved dockerapi to 95.105.221.201 and not to the correct Docker container IP.
Hmm, somehow curl resolved dockerapi to 95.105.221.201 and not to the correct Docker container IP.
Same here, just another IP. Maybe that's because my domain (the one in resolv.conf) has a normal A record and Docker somehow won't use its internal DNS.
Hmm, somehow curl resolved dockerapi to 95.105.221.201 and not to the correct Docker container IP.
Without the search domain set, it resolves correctly. I think that's the curl bug we were talking about. Apparently it's still not quite fixed?
Can affected people please try both commands below and check if it makes a difference?
docker compose exec php-fpm-mailcow bash -c 'curl --insecure https://dockerapi/containers/json | jq -r ".[] | {name: .Config.Labels[\"com.docker.compose.service\"], project: .Config.Labels[\"com.docker.compose.project\"], id: .Id}" 2> /dev/null | jq -rc "select( .name | tostring | contains(\"mysql-mailcow\")) | select( .project | tostring)"'
docker compose exec php-fpm-mailcow bash -c 'curl --insecure https://dockerapi./containers/json | jq -r ".[] | {name: .Config.Labels[\"com.docker.compose.service\"], project: .Config.Labels[\"com.docker.compose.project\"], id: .Id}" 2> /dev/null | jq -rc "select( .name | tostring | contains(\"mysql-mailcow\")) | select( .project | tostring)"'
Please try a docker compose pull and up -d again...
Please try a docker compose pull and up -d again...
Works! But let me restart the server and check if it still works after that. Edit: Still works, thanks!
To share an update: following @FreddleSpl0it's idea, we did some further tests with affected people in the Telegram group. We realized that this nslookup wasn't working as expected:
$ docker compose exec php-fpm-mailcow nslookup dockerapi
Server: 127.0.0.11
Address: 127.0.0.11:53
** server can't find dockerapi.domain.tld: NXDOMAIN
** server can't find dockerapi.domain.tld: NXDOMAIN
It appends the search domain from the host when looking up the name, which obviously isn't valid. Hence, the curl request fails.
Using a trailing dot (which marks the name as fully qualified and prevents the search domain from being appended), it works:
$ docker compose exec php-fpm-mailcow nslookup dockerapi.
Server: 127.0.0.11
Address: 127.0.0.11:53
Non-authoritative answer:
Name: dockerapi
Address: 172.22.1.3
Non-authoritative answer:
Name: dockerapi
Address: fd4d:6169:6c63:6f77::5
In @DerLinkman's new container image the trailing dot was added, so the lookup now targets dockerapi. instead. This seems to work.
So it might indeed be a bug in Alpine Linux's curl and how it does DNS resolution compared to other tools.
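The resolver behavior described above can be sketched in a few lines. This is a simplification of the usual stub-resolver search-list logic (real resolvers also consult the ndots option), and the domain names are just examples:

```python
def candidate_names(name: str, search_domains: list[str]) -> list[str]:
    """Return the lookups a stub resolver would try for `name`.

    A trailing dot marks the name as fully qualified, so the search
    list is skipped; otherwise each search domain is tried first and
    the bare name last (simplified glibc-style behavior).
    """
    if name.endswith("."):
        return [name.rstrip(".")]
    return [f"{name}.{d}" for d in search_domains] + [name]

# "dockerapi" picks up the host's search domain first, which the
# upstream DNS answers with NXDOMAIN, while "dockerapi." goes
# straight to Docker's embedded DNS.
print(candidate_names("dockerapi", ["domain.tld"]))   # ['dockerapi.domain.tld', 'dockerapi']
print(candidate_names("dockerapi.", ["domain.tld"]))  # ['dockerapi']
```

The apparent curl bug would then be that the failing NXDOMAIN answer for the search-domain variant was treated as final instead of falling back to the bare name.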
Edit: The strange part is that it did not affect all installations. The update of my private instance just before worked fine, as did @Thomas2500's. Internal mailcow installations by @DerLinkman did not experience these issues either.
Fixed within 2024-06
Awesome, @DerLinkman! Sorry, I was driving home from the office ... once back home I pulled the container again, and here it works too.
Awesome job!
Please also thank @patschi and @FreddleSpl0it :)
Today's update (...a) runs into an unhealthy unbound for me on Ubuntu 22.04.4 (Docker version 27.0.2, build 912c1dd, and Docker Compose version v2.7.0). Anyone else, or does anyone have a solution for that?
EDIT: after a docker-compose pull and a few minutes of unhealthy unbound, health.log shows:
2024-06-27 22:35:47: Starting health check
2024-06-27 22:35:53: Healthcheck: Ping Checks WORKING properly!
2024-06-27 22:35:53: Healthcheck: DNS Resolver WORKING properly!
2024-06-27 22:35:53: Healthcheck: ALL CHECKS WERE SUCCESSFUL! Unbound is healthy!
EDIT 2 - FYI: no problems on Debian 12 with Docker version 27.0.2, build 912c1dd and Docker Compose version v2.7.0
Anyone else or anyone a solution for that?
Unfortunately, this seems to be a fairly common error and has also hit me from one update to the next. If you search the issues, you will find several users with the same phenomenon.
My workaround: set the variable SKIP_UNBOUND_HEALTHCHECK=y in mailcow.conf. Unfortunately not a real solution, but it suppresses the annoying false negative.
I am open to a sustainable solution...
Which branch are you using?
master
Which architecture are you using?
x86
Operating System:
Ubuntu 22.04 LTS
Server/VM specifications:
16GB RAM, 8vCPU
Is Apparmor, SELinux or similar active?
no
Virtualization technology:
ESXi 7.0U2
Docker version:
25.0.3
docker-compose version or docker compose version:
v2.6.1
mailcow version:
2024-06
Reverse proxy:
HAProxy (on a different VM)