truecharts / charts

Community Helm Chart Repository
https://truecharts.org
GNU Affero General Public License v3.0

tt-rss php7.4-fpm.sock error on latest chart version #8647

Closed: ray73864 closed this issue 8 months ago

ray73864 commented 1 year ago

App Name

tt-rss

SCALE Version

22.02.3

App Version

2.09113_11.0.0

Application Events

2023-05-01 8:19:51
Readiness probe failed: HTTP probe failed with statuscode: 500
2023-05-01 8:19:50
Created container postgres
2023-05-01 8:19:50
Started container postgres
2023-05-01 8:19:50
Readiness probe failed: Get "http://172.16.3.62:8000/readyz": dial tcp 172.16.3.62:8000: connect: connection refused
2023-05-01 8:19:40
Container image "ghcr.io/cloudnative-pg/postgresql:15.2" already present on machine
2023-05-01 8:19:34
Created container bootstrap-controller
2023-05-01 8:19:34
Started container bootstrap-controller
2023-05-01 8:19:24
Add eth0 [172.16.3.62/16] from ix-net
2023-05-01 8:19:24
Container image "ghcr.io/cloudnative-pg/cloudnative-pg:1.19.0" already present on machine
2023-05-01 8:19:15
Job completed
2023-05-01 8:19:15
Successfully assigned ix-tt-rss/tt-rss-cnpg-main-2 to ix-truenas
2023-05-01 8:18:29
Created container join
2023-05-01 8:18:29
Started container join
2023-05-01 8:18:19
Container image "ghcr.io/cloudnative-pg/postgresql:15.2" already present on machine
2023-05-01 8:18:14
Created container bootstrap-controller
2023-05-01 8:18:14
Started container bootstrap-controller
2023-05-01 8:18:07
Container image "tccr.io/truecharts/tt-rss:v2.0.9113@sha256:ef3e084ba91d3e9ed32af6b948e17c88e33223fa80ffe5c8bb4d5dac9aa0e9b9" already present on machine
2023-05-01 8:18:07
Created container tt-rss
2023-05-01 8:18:07
Started container tt-rss
2023-05-01 8:18:07
Add eth0 [172.16.3.61/16] from ix-net
2023-05-01 8:18:07
Container image "ghcr.io/cloudnative-pg/cloudnative-pg:1.19.0" already present on machine
2023-05-01 8:18:02
Successfully assigned ix-tt-rss/tt-rss-cnpg-main-2-join-wbhhw to ix-truenas
2023-05-01 8:18:01
Successfully provisioned volume pvc-b50e4d16-a01a-48be-8952-8dc0c2425481
2023-05-01 8:18:01
Successfully provisioned volume pvc-33a4997d-54cb-4b8d-9c1f-564294288557
2023-05-01 8:18:00
Creating instance tt-rss-cnpg-main-2
2023-05-01 8:18:00
Created pod: tt-rss-cnpg-main-2-join-wbhhw
2023-05-01 8:18:00
0/1 nodes are available: 1 persistentvolumeclaim "tt-rss-cnpg-main-2-wal" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
2023-05-01 8:18:00
External provisioner is provisioning volume for claim "ix-tt-rss/tt-rss-cnpg-main-2"
2023-05-01 8:18:00
External provisioner is provisioning volume for claim "ix-tt-rss/tt-rss-cnpg-main-2-wal"
2023-05-01 8:18:00
waiting for a volume to be created, either by external provisioner "zfs.csi.openebs.io" or manually created by system administrator
2023-05-01 8:18:00
waiting for a volume to be created, either by external provisioner "zfs.csi.openebs.io" or manually created by system administrator
2023-05-01 8:17:52
Readiness probe failed: HTTP probe failed with statuscode: 500
2023-05-01 8:17:51
Readiness probe failed: Get "http://172.16.3.60:8000/readyz": dial tcp 172.16.3.60:8000: connect: connection refused
2023-05-01 8:17:50
Created container postgres
2023-05-01 8:17:50
Started container postgres
2023-05-01 8:17:39
Container image "ghcr.io/cloudnative-pg/postgresql:15.2" already present on machine
2023-05-01 8:17:34
Created container bootstrap-controller
2023-05-01 8:17:34
Started container bootstrap-controller
2023-05-01 8:17:29
Add eth0 [172.16.3.60/16] from ix-net
2023-05-01 8:17:29
Container image "ghcr.io/cloudnative-pg/cloudnative-pg:1.19.0" already present on machine
2023-05-01 8:17:19
Job completed
2023-05-01 8:17:19
Successfully assigned ix-tt-rss/tt-rss-cnpg-main-1 to ix-truenas
2023-05-01 8:17:06
Created container pgbouncer
2023-05-01 8:17:06
Created container initdb
2023-05-01 8:17:06
Created container pgbouncer
2023-05-01 8:17:06
Started container initdb
2023-05-01 8:17:06
Started container pgbouncer
2023-05-01 8:17:06
Started container pgbouncer
2023-05-01 8:17:01
Successfully pulled image "ghcr.io/cloudnative-pg/pgbouncer:1.18.0" in 43.702388746s
2023-05-01 8:16:40
Successfully pulled image "ghcr.io/cloudnative-pg/pgbouncer:1.18.0" in 23.157686533s
2023-05-01 8:16:30
Started container tt-rss-system-cnpg-wait
2023-05-01 8:16:30
Container image "ghcr.io/cloudnative-pg/postgresql:15.2" already present on machine
2023-05-01 8:16:29
Created container tt-rss-system-cnpg-wait
2023-05-01 8:16:23
Add eth0 [172.16.3.59/16] from ix-net
2023-05-01 8:16:23
Container image "tccr.io/truecharts/db-wait-postgres:1.1.0@sha256:a163c7836d7bb436a428f5d55bbba0eb73bcdb9bc202047e2523bbb539c113e6" already present on machine
2023-05-01 8:16:23
Created container bootstrap-controller
2023-05-01 8:16:23
Started container bootstrap-controller
2023-05-01 8:16:22
Add eth0 [172.16.3.58/16] from ix-net
2023-05-01 8:16:22
Container image "ghcr.io/cloudnative-pg/cloudnative-pg:1.19.0" already present on machine
2023-05-01 8:16:17
Pulling image "ghcr.io/cloudnative-pg/pgbouncer:1.18.0"
2023-05-01 8:16:17
Pulling image "ghcr.io/cloudnative-pg/pgbouncer:1.18.0"
2023-05-01 8:16:16
Successfully assigned ix-tt-rss/tt-rss-77444564bc-j5vsp to ix-truenas
2023-05-01 8:16:16
Successfully assigned ix-tt-rss/tt-rss-cnpg-main-1-initdb-ksh5c to ix-truenas
2023-05-01 8:16:15
Created container bootstrap-controller
2023-05-01 8:16:15
Successfully provisioned volume pvc-8704d49d-831b-4e72-82c6-ff32f75d629a
2023-05-01 8:16:15
Started container bootstrap-controller
2023-05-01 8:16:15
Started container bootstrap-controller
2023-05-01 8:16:15
Updated LoadBalancer with new IPs: [] -> [192.168.234.31]
2023-05-01 8:16:15
Successfully provisioned volume pvc-6f76596c-8e09-4989-9b45-40476f95cc86
2023-05-01 8:16:14
Successfully provisioned volume pvc-c619aa7c-db8d-4f21-b03d-502e10a85335
2023-05-01 8:16:14
Successfully provisioned volume pvc-ed0392ce-8202-472b-808c-10d06f1587cc
2023-05-01 8:16:14
Successfully provisioned volume pvc-923724ad-0572-42a0-825c-46c1b7fb4b4f
2023-05-01 8:16:14
Add eth0 [172.16.3.56/16] from ix-net
2023-05-01 8:16:14
Container image "ghcr.io/cloudnative-pg/cloudnative-pg:1.19.0" already present on machine
2023-05-01 8:16:14
0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
2023-05-01 8:16:14
0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
2023-05-01 8:16:14
Add eth0 [172.16.3.57/16] from ix-net
2023-05-01 8:16:14
Container image "ghcr.io/cloudnative-pg/cloudnative-pg:1.19.0" already present on machine
2023-05-01 8:16:14
Created container bootstrap-controller
2023-05-01 8:16:13
Created new CertificateRequest resource "tt-rss-tls-0-mxzgz"
2023-05-01 8:16:13
Not signing CertificateRequest until it is Approved
2023-05-01 8:16:13
Not signing CertificateRequest until it is Approved
2023-05-01 8:16:13
Not signing CertificateRequest until it is Approved
2023-05-01 8:16:13
Not signing CertificateRequest until it is Approved
2023-05-01 8:16:13
Not signing CertificateRequest until it is Approved
2023-05-01 8:16:13
Certificate request has been approved by cert-manager.io
2023-05-01 8:16:13
Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "cert-manager" not found
2023-05-01 8:16:13
Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "cert-manager" not found
2023-05-01 8:16:13
Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "cert-manager" not found
2023-05-01 8:16:13
Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "cert-manager" not found
2023-05-01 8:16:13
Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "cert-manager" not found
2023-05-01 8:16:13
Creating PodDisruptionBudget tt-rss-cnpg-main-primary
2023-05-01 8:16:13
Creating ServiceAccount
2023-05-01 8:16:13
Creating Cluster Role
2023-05-01 8:16:13
No matching pods found
2023-05-01 8:16:13
Primary instance (initdb)
2023-05-01 8:16:13
External provisioner is provisioning volume for claim "ix-tt-rss/tt-rss-cnpg-main-1"
2023-05-01 8:16:13
External provisioner is provisioning volume for claim "ix-tt-rss/tt-rss-cnpg-main-1-wal"
2023-05-01 8:16:13
waiting for a volume to be created, either by external provisioner "zfs.csi.openebs.io" or manually created by system administrator
2023-05-01 8:16:13
waiting for a volume to be created, either by external provisioner "zfs.csi.openebs.io" or manually created by system administrator
2023-05-01 8:16:13
Created pod: tt-rss-cnpg-main-1-initdb-ksh5c
2023-05-01 8:16:13
0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
2023-05-01 8:16:13
Scaled up replica set tt-rss-cnpg-main-rw-cdb5b888f to 2
2023-05-01 8:16:13
Created pod: tt-rss-cnpg-main-rw-cdb5b888f-z5mhz
2023-05-01 8:16:13
Successfully assigned ix-tt-rss/tt-rss-cnpg-main-rw-cdb5b888f-z5mhz to ix-truenas
2023-05-01 8:16:13
Created pod: tt-rss-cnpg-main-rw-cdb5b888f-qmjf2
2023-05-01 8:16:13
Successfully assigned ix-tt-rss/tt-rss-cnpg-main-rw-cdb5b888f-qmjf2 to ix-truenas
2023-05-01 8:16:12
Job completed
2023-05-01 8:16:12
Ensuring load balancer
2023-05-01 8:16:12
External provisioner is provisioning volume for claim "ix-tt-rss/tt-rss-themes"
2023-05-01 8:16:12
External provisioner is provisioning volume for claim "ix-tt-rss/tt-rss-plugins"
2023-05-01 8:16:12
Applied LoadBalancer DaemonSet kube-system/svclb-tt-rss-ec16777d
2023-05-01 8:16:12
waiting for a volume to be created, either by external provisioner "zfs.csi.openebs.io" or manually created by system administrator
2023-05-01 8:16:12
Scaled up replica set tt-rss-77444564bc to 1
2023-05-01 8:16:12
waiting for a volume to be created, either by external provisioner "zfs.csi.openebs.io" or manually created by system administrator
2023-05-01 8:16:12
Issuing certificate as Secret does not exist
2023-05-01 8:16:12
Stored new private key in temporary Secret resource "tt-rss-tls-0-bzbbl"
2023-05-01 8:16:12
Created pod: tt-rss-77444564bc-j5vsp
2023-05-01 8:16:12
0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
2023-05-01 8:16:12
waiting for a volume to be created, either by external provisioner "zfs.csi.openebs.io" or manually created by system administrator
2023-05-01 8:16:12
External provisioner is provisioning volume for claim "ix-tt-rss/tt-rss-config"
2023-05-01 8:16:06
Startup probe errored: rpc error: code = Unknown desc = container not running (c18061c01815be6c435b3f72b7a7b77fcb9beec67f3dbf9d3a4e9df4aff8c680)
2023-05-01 8:15:55
Created container tt-rss-manifests
2023-05-01 8:15:55
Started container tt-rss-manifests
2023-05-01 8:15:45
Add eth0 [172.16.3.54/16] from ix-net
2023-05-01 8:15:45
Container image "tccr.io/truecharts/kubectl:v1.26.0@sha256:323ab7aa3e7ce84c024df79d0f364282c1135499298f54be2ade46508a116c4b" already present on machine
2023-05-01 8:15:35
Created pod: tt-rss-manifests-6mxwc
2023-05-01 8:15:35
Successfully assigned ix-tt-rss/tt-rss-manifests-6mxwc to ix-truenas

Application Logs

Application Name:tt-rss Pod Name:tt-rss-77444564bc-j5vsp Container Name:tt-rss

2023-05-01 00:18:07.837311+00:00tt-rss-cnpg-main-rw:5432 - accepting connections
2023-05-01 00:18:07.901067+00:00ERROR:  relation "ttrss_version" does not exist
2023-05-01 00:18:07.901087+00:00LINE 1: select * from ttrss_version
2023-05-01 00:18:07.901091+00:00^
2023-05-01 00:18:07.928925+00:00sudo: unable to send audit message: Operation not permitted
2023-05-01 00:18:07.989506+00:00[00:18:07/34] Lock: update.lock
2023-05-01 00:18:07.990012+00:00[00:18:07/34] Proceeding to update without confirmation.
2023-05-01 00:18:07.990300+00:00[00:18:07/34] Loading base database schema...
2023-05-01 00:18:08.333097+00:00[00:18:08/34] Migration finished, current version: 146
2023-05-01 00:18:08.389832+00:00[01-May-2023 08:18:08] ERROR: unable to bind listening socket for address '/run/php/php7.4-fpm.sock': No such file or directory (2)
2023-05-01 00:18:08.389855+00:00[01-May-2023 08:18:08] ERROR: FPM initialization failed
2023-05-01 00:18:08.747053+00:00sudo: unable to send audit message: Operation not permitted
2023-05-01 00:18:08.777722+00:00[00:18:08/54] Installing shutdown handlers
2023-05-01 00:18:08.777749+00:00[00:18:08/54] Spawned child process with PID 57 for task 0.
2023-05-01 00:18:08.778055+00:00[00:18:08/54] Spawned child process with PID 59 for task 1.
2023-05-01 00:18:08.810971+00:00[00:18:08/60] Using task id 0
2023-05-01 00:18:08.810993+00:00[00:18:08/60] Lock: update_daemon-57.lock
2023-05-01 00:18:08.811036+00:00[00:18:08/60] Waiting before update (0)...
2023-05-01 00:18:08.811050+00:00[00:18:08/62] Using task id 1
2023-05-01 00:18:08.811053+00:00[00:18:08/62] Lock: update_daemon-59.lock
2023-05-01 00:18:08.811103+00:00[00:18:08/62] Waiting before update (5)...
2023-05-01 00:18:08.814778+00:00[00:18:08/60] Scheduled 0 feeds to update...
2023-05-01 00:18:08.814987+00:00[00:18:08/60] Sending digests, batch of max 15 users, headline limit = 1000
2023-05-01 00:18:08.815256+00:00[00:18:08/60] All done.
2023-05-01 00:18:08.815729+00:00[00:18:08/60] Expired cache/export: removed 0 files.
2023-05-01 00:18:08.815755+00:00[00:18:08/60] Expired cache/feeds: removed 0 files.
2023-05-01 00:18:08.815772+00:00[00:18:08/60] Expired cache/images: removed 0 files.
2023-05-01 00:18:08.815795+00:00[00:18:08/60] Expired cache/upload: removed 0 files.
2023-05-01 00:18:08.815874+00:00[00:18:08/60] Removed 0 old lock files.
2023-05-01 00:18:08.815878+00:00[00:18:08/60] Removing old error log entries...
2023-05-01 00:18:08.818450+00:00[00:18:08/60] Purged 0 orphaned posts.
2023-05-01 00:18:09.825483+00:00[00:18:09/54] Child process with PID 57 reaped.
2023-05-01 00:18:09.825576+00:00[00:18:09/54] Received SIGCHLD, 1 active tasks left.
2023-05-01 00:18:13.814858+00:00[00:18:13/62] Scheduled 0 feeds to update...
2023-05-01 00:18:13.815008+00:00[00:18:13/62] Sending digests, batch of max 15 users, headline limit = 1000
2023-05-01 00:18:13.815215+00:00[00:18:13/62] All done.
2023-05-01 00:18:14.820709+00:00[00:18:14/54] Child process with PID 59 reaped.
2023-05-01 00:18:14.820725+00:00[00:18:14/54] Received SIGCHLD, 0 active tasks left.
2023-05-01 00:19:08.825443+00:00[00:19:08/54] 0 active tasks, next spawn at 60 sec.
2023-05-01 00:20:08.831134+00:00[00:20:08/54] 0 active tasks, next spawn at 0 sec.
2023-05-01 00:20:09.831955+00:00[00:20:09/54] Spawned child process with PID 67 for task 0.
2023-05-01 00:20:09.832661+00:00[00:20:09/54] Spawned child process with PID 69 for task 1.
2023-05-01 00:20:09.879282+00:00[00:20:09/72] Using task id 0
2023-05-01 00:20:09.879314+00:00[00:20:09/72] Lock: update_daemon-67.lock
2023-05-01 00:20:09.879319+00:00[00:20:09/71] Using task id 1
2023-05-01 00:20:09.879325+00:00[00:20:09/71] Lock: update_daemon-69.lock
2023-05-01 00:20:09.879366+00:00[00:20:09/72] Waiting before update (0)...
2023-05-01 00:20:09.879400+00:00[00:20:09/71] Waiting before update (5)...
2023-05-01 00:20:09.883248+00:00[00:20:09/72] Scheduled 0 feeds to update...
2023-05-01 00:20:09.883439+00:00[00:20:09/72] Sending digests, batch of max 15 users, headline limit = 1000
2023-05-01 00:20:09.883655+00:00[00:20:09/72] All done.
2023-05-01 00:20:09.884073+00:00[00:20:09/72] Expired cache/export: removed 0 files.
2023-05-01 00:20:09.884101+00:00[00:20:09/72] Expired cache/feeds: removed 0 files.
2023-05-01 00:20:09.884113+00:00[00:20:09/72] Expired cache/images: removed 0 files.
2023-05-01 00:20:09.884134+00:00[00:20:09/72] Expired cache/upload: removed 0 files.
2023-05-01 00:20:09.884232+00:00[00:20:09/72] Removed 0 old lock files.
2023-05-01 00:20:09.884239+00:00[00:20:09/72] Removing old error log entries...
2023-05-01 00:20:09.886217+00:00[00:20:09/72] Purged 0 orphaned posts.
2023-05-01 00:20:10.893196+00:00[00:20:10/54] Child process with PID 67 reaped.
2023-05-01 00:20:10.893263+00:00[00:20:10/54] Received SIGCHLD, 1 active tasks left.
2023-05-01 00:20:14.883817+00:00[00:20:14/71] Scheduled 0 feeds to update...
2023-05-01 00:20:14.884032+00:00[00:20:14/71] Sending digests, batch of max 15 users, headline limit = 1000
2023-05-01 00:20:14.884339+00:00[00:20:14/71] All done.
2023-05-01 00:20:15.892166+00:00[00:20:15/54] Child process with PID 69 reaped.
2023-05-01 00:20:15.892198+00:00[00:20:15/54] Received SIGCHLD, 0 active tasks left.
2023-05-01 00:21:09.897034+00:00[00:21:09/54] 0 active tasks, next spawn at 60 sec.
2023-05-01 00:22:09.902453+00:00[00:22:09/54] 0 active tasks, next spawn at 0 sec.
2023-05-01 00:22:10.903299+00:00[00:22:10/54] Spawned child process with PID 77 for task 0.
2023-05-01 00:22:10.903861+00:00[00:22:10/54] Spawned child process with PID 78 for task 1.
2023-05-01 00:22:10.953483+00:00[00:22:10/82] Using task id 1
2023-05-01 00:22:10.953511+00:00[00:22:10/82] Lock: update_daemon-78.lock
2023-05-01 00:22:10.953515+00:00[00:22:10/81] Using task id 0
2023-05-01 00:22:10.953520+00:00[00:22:10/81] Lock: update_daemon-77.lock
2023-05-01 00:22:10.953655+00:00[00:22:10/82] Waiting before update (5)...
2023-05-01 00:22:10.953669+00:00[00:22:10/81] Waiting before update (0)...
2023-05-01 00:22:10.957634+00:00[00:22:10/81] Scheduled 0 feeds to update...
2023-05-01 00:22:10.957839+00:00[00:22:10/81] Sending digests, batch of max 15 users, headline limit = 1000
2023-05-01 00:22:10.958048+00:00[00:22:10/81] All done.
2023-05-01 00:22:10.958496+00:00[00:22:10/81] Expired cache/export: removed 0 files.
2023-05-01 00:22:10.958521+00:00[00:22:10/81] Expired cache/feeds: removed 0 files.
2023-05-01 00:22:10.958535+00:00[00:22:10/81] Expired cache/images: removed 0 files.
2023-05-01 00:22:10.958553+00:00[00:22:10/81] Expired cache/upload: removed 0 files.
2023-05-01 00:22:10.958655+00:00[00:22:10/81] Removed 0 old lock files.
2023-05-01 00:22:10.958663+00:00[00:22:10/81] Removing old error log entries...
2023-05-01 00:22:10.960525+00:00[00:22:10/81] Purged 0 orphaned posts.
2023-05-01 00:22:11.969126+00:00[00:22:11/54] Child process with PID 77 reaped.
2023-05-01 00:22:11.969202+00:00[00:22:11/54] Received SIGCHLD, 1 active tasks left.
2023-05-01 00:22:15.958848+00:00[00:22:15/82] Scheduled 0 feeds to update...
2023-05-01 00:22:15.959073+00:00[00:22:15/82] Sending digests, batch of max 15 users, headline limit = 1000
2023-05-01 00:22:15.959364+00:00[00:22:15/82] All done.
2023-05-01 00:22:16.968390+00:00[00:22:16/54] Child process with PID 78 reaped.
2023-05-01 00:22:16.968437+00:00[00:22:16/54] Received SIGCHLD, 0 active tasks left.
2023-05-01 00:23:10.972982+00:00[00:23:10/54] 0 active tasks, next spawn at 60 sec.
2023-05-01 00:24:10.978234+00:00[00:24:10/54] 0 active tasks, next spawn at 0 sec.
2023-05-01 00:24:11.979330+00:00[00:24:11/54] Spawned child process with PID 87 for task 0.
2023-05-01 00:24:11.979946+00:00[00:24:11/54] Spawned child process with PID 89 for task 1.
2023-05-01 00:24:12.020365+00:00[00:24:12/91] Using task id 0
2023-05-01 00:24:12.020388+00:00[00:24:12/92] Using task id 1
2023-05-01 00:24:12.020392+00:00[00:24:12/92] Lock: update_daemon-89.lock
2023-05-01 00:24:12.020395+00:00[00:24:12/91] Lock: update_daemon-87.lock
2023-05-01 00:24:12.020459+00:00[00:24:12/92] Waiting before update (5)...
2023-05-01 00:24:12.020476+00:00[00:24:12/91] Waiting before update (0)...
2023-05-01 00:24:12.023790+00:00[00:24:12/91] Scheduled 0 feeds to update...
2023-05-01 00:24:12.023952+00:00[00:24:12/91] Sending digests, batch of max 15 users, headline limit = 1000
2023-05-01 00:24:12.024152+00:00[00:24:12/91] All done.
2023-05-01 00:24:12.024532+00:00[00:24:12/91] Expired cache/export: removed 0 files.
2023-05-01 00:24:12.024546+00:00[00:24:12/91] Expired cache/feeds: removed 0 files.
2023-05-01 00:24:12.024562+00:00[00:24:12/91] Expired cache/images: removed 0 files.
2023-05-01 00:24:12.024578+00:00[00:24:12/91] Expired cache/upload: removed 0 files.
2023-05-01 00:24:12.024657+00:00[00:24:12/91] Removed 0 old lock files.
2023-05-01 00:24:12.024662+00:00[00:24:12/91] Removing old error log entries...
2023-05-01 00:24:12.026621+00:00[00:24:12/91] Purged 0 orphaned posts.
2023-05-01 00:24:13.034508+00:00[00:24:13/54] Child process with PID 87 reaped.
2023-05-01 00:24:13.034603+00:00[00:24:13/54] Received SIGCHLD, 1 active tasks left.
2023-05-01 00:24:17.024729+00:00[00:24:17/92] Scheduled 0 feeds to update...
2023-05-01 00:24:17.024845+00:00[00:24:17/92] Sending digests, batch of max 15 users, headline limit = 1000
2023-05-01 00:24:17.025174+00:00[00:24:17/92] All done.
2023-05-01 00:24:18.033893+00:00[00:24:18/54] Child process with PID 89 reaped.
2023-05-01 00:24:18.033925+00:00[00:24:18/54] Received SIGCHLD, 0 active tasks left.
2023-05-01 00:25:11.038573+00:00[00:25:11/54] 0 active tasks, next spawn at 60 sec.
2023-05-01 00:26:11.043574+00:00[00:26:11/54] 0 active tasks, next spawn at 0 sec.
2023-05-01 00:26:12.044642+00:00[00:26:12/54] Spawned child process with PID 97 for task 0.
2023-05-01 00:26:12.045047+00:00[00:26:12/54] Spawned child process with PID 98 for task 1.
2023-05-01 00:26:12.100464+00:00[00:26:12/101] Using task id 1
2023-05-01 00:26:12.100485+00:00[00:26:12/102] Using task id 0
2023-05-01 00:26:12.100489+00:00[00:26:12/101] Lock: update_daemon-98.lock
2023-05-01 00:26:12.100498+00:00[00:26:12/102] Lock: update_daemon-97.lock
2023-05-01 00:26:12.100570+00:00[00:26:12/102] Waiting before update (0)...
2023-05-01 00:26:12.100597+00:00[00:26:12/101] Waiting before update (5)...
2023-05-01 00:26:12.104122+00:00[00:26:12/102] Scheduled 0 feeds to update...
2023-05-01 00:26:12.104341+00:00[00:26:12/102] Sending digests, batch of max 15 users, headline limit = 1000
2023-05-01 00:26:12.104547+00:00[00:26:12/102] All done.
2023-05-01 00:26:12.104960+00:00[00:26:12/102] Expired cache/export: removed 0 files.
2023-05-01 00:26:12.104982+00:00[00:26:12/102] Expired cache/feeds: removed 0 files.
2023-05-01 00:26:12.104997+00:00[00:26:12/102] Expired cache/images: removed 0 files.
2023-05-01 00:26:12.105016+00:00[00:26:12/102] Expired cache/upload: removed 0 files.
2023-05-01 00:26:12.105103+00:00[00:26:12/102] Removed 0 old lock files.
2023-05-01 00:26:12.105110+00:00[00:26:12/102] Removing old error log entries...
2023-05-01 00:26:12.107056+00:00[00:26:12/102] Purged 0 orphaned posts.
2023-05-01 00:26:13.115196+00:00[00:26:13/54] Child process with PID 97 reaped.
2023-05-01 00:26:13.115279+00:00[00:26:13/54] Received SIGCHLD, 1 active tasks left.
2023-05-01 00:26:17.105093+00:00[00:26:17/101] Scheduled 0 feeds to update...
2023-05-01 00:26:17.105246+00:00[00:26:17/101] Sending digests, batch of max 15 users, headline limit = 1000
2023-05-01 00:26:17.105447+00:00[00:26:17/101] All done.
2023-05-01 00:26:18.113266+00:00[00:26:18/54] Child process with PID 98 reaped.
2023-05-01 00:26:18.113300+00:00[00:26:18/54] Received SIGCHLD, 0 active tasks left.
2023-05-01 00:27:12.117954+00:00[00:27:12/54] 0 active tasks, next spawn at 60 sec.
2023-05-01 00:28:12.122769+00:00[00:28:12/54] 0 active tasks, next spawn at 0 sec.
2023-05-01 00:28:13.123799+00:00[00:28:13/54] Spawned child process with PID 107 for task 0.
2023-05-01 00:28:13.124232+00:00[00:28:13/54] Spawned child process with PID 109 for task 1.
2023-05-01 00:28:13.158911+00:00[00:28:13/112] Using task id 1
2023-05-01 00:28:13.158933+00:00[00:28:13/112] Lock: update_daemon-109.lock
2023-05-01 00:28:13.158945+00:00[00:28:13/111] Using task id 0
2023-05-01 00:28:13.158949+00:00[00:28:13/111] Lock: update_daemon-107.lock
2023-05-01 00:28:13.158998+00:00[00:28:13/112] Waiting before update (5)...
2023-05-01 00:28:13.159024+00:00[00:28:13/111] Waiting before update (0)...
2023-05-01 00:28:13.162291+00:00[00:28:13/111] Scheduled 0 feeds to update...
2023-05-01 00:28:13.162487+00:00[00:28:13/111] Sending digests, batch of max 15 users, headline limit = 1000
2023-05-01 00:28:13.162696+00:00[00:28:13/111] All done.
2023-05-01 00:28:13.163111+00:00[00:28:13/111] Expired cache/export: removed 0 files.
2023-05-01 00:28:13.163128+00:00[00:28:13/111] Expired cache/feeds: removed 0 files.
2023-05-01 00:28:13.163147+00:00[00:28:13/111] Expired cache/images: removed 0 files.
2023-05-01 00:28:13.163167+00:00[00:28:13/111] Expired cache/upload: removed 0 files.
2023-05-01 00:28:13.163255+00:00[00:28:13/111] Removed 0 old lock files.
2023-05-01 00:28:13.163260+00:00[00:28:13/111] Removing old error log entries...
2023-05-01 00:28:13.165269+00:00[00:28:13/111] Purged 0 orphaned posts.
2023-05-01 00:28:14.173257+00:00[00:28:14/54] Child process with PID 107 reaped.
2023-05-01 00:28:14.173355+00:00[00:28:14/54] Received SIGCHLD, 1 active tasks left.
2023-05-01 00:28:18.163208+00:00[00:28:18/112] Scheduled 0 feeds to update...
2023-05-01 00:28:18.163419+00:00[00:28:18/112] Sending digests, batch of max 15 users, headline limit = 1000
2023-05-01 00:28:18.163774+00:00[00:28:18/112] All done.
2023-05-01 00:28:19.171249+00:00[00:28:19/54] Child process with PID 109 reaped.
2023-05-01 00:28:19.171290+00:00[00:28:19/54] Received SIGCHLD, 0 active tasks left.

### Application Configuration

Everything was left at defaults except for the following:

- TTRSS Self URL Path changed from blank to `https://rss.rayherring.net`
- Enable Ingress changed from false to true
  - Hostname set to `rss.rayherring.net`
  - Path added as `/`
- TLS settings added
  - Certificate Host set to `rss.rayherring.net`
  - Use Cert-Manager clusterIssuer set to `cert-manager`

Nothing else was changed.

The ingress settings are the same as I have used for other apps, with no issues there.
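For reference, those toggles map to ingress values roughly like the sketch below. The key names here are illustrative assumptions, not the chart's exact values schema:

```yaml
# Illustrative sketch only: key names assumed, not taken from the tt-rss chart.
ingress:
  main:
    enabled: true
    hosts:
      - host: rss.rayherring.net
        paths:
          - path: /
    tls:
      - hosts:
          - rss.rayherring.net
        # Must name an existing ClusterIssuer. Note the application events
        # above report: clusterissuer.cert-manager.io "cert-manager" not found,
        # so this issuer name is worth double-checking.
        clusterIssuer: cert-manager
```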

### Describe the bug

After deployment, navigating to https://rss.rayherring.net just returns a page with '502 Bad Gateway'.

### To Reproduce

Deploy tt-rss from the catalogue, wait for the deployment to finish, then try to navigate to the site.

### Expected Behavior

I should be presented with the tt-rss web UI.

### Screenshots

![image](https://user-images.githubusercontent.com/1343103/235383797-4313817e-adbd-47c2-9099-b94e0a30d147.png)

### Additional Context

The log for Application Name: tt-rss, Pod Name: tt-rss-77444564bc-j5vsp, Container Name: tt-rss contains the following lines:

2023-05-01 00:18:08.389832+00:00[01-May-2023 08:18:08] ERROR: unable to bind listening socket for address '/run/php/php7.4-fpm.sock': No such file or directory (2)
2023-05-01 00:18:08.389855+00:00[01-May-2023 08:18:08] ERROR: FPM initialization failed

With PHP-FPM failing to initialize, the web server has no working upstream for PHP requests, which presumably explains the 502.



Connecting to that pod's shell, there is no 'php' directory under /run, and `find / -name php7.4-fpm.sock` returns nothing, so the socket doesn't exist anywhere in the container.
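A minimal sketch of checks one could run from the pod's shell to pin down the socket mismatch. The config path is an assumption about the image layout, so this demo writes a sample pool file under /tmp to make the steps runnable anywhere:

```shell
# The real config would live somewhere like /etc/php/7.4/fpm/pool.d/
# (an assumption about the image); simulate it with a sample pool file.
mkdir -p /tmp/fpm-demo/pool.d
cat > /tmp/fpm-demo/pool.d/www.conf <<'EOF'
[www]
listen = /run/php/php7.4-fpm.sock
EOF

# 1. Read where FPM wants its socket from the pool config.
SOCK=$(sed -n 's/^listen *= *//p' /tmp/fpm-demo/pool.d/www.conf)
echo "configured socket: $SOCK"

# 2. FPM's 'No such file or directory' comes from bind(2): the socket's
#    parent directory must already exist; the socket file itself need not.
DIR=$(dirname "$SOCK")
if [ ! -d "$DIR" ]; then
  echo "missing socket directory: $DIR"
  # Possible stopgap inside the pod: mkdir -p "$DIR" before FPM starts.
fi
```

Inside the actual container, the same `sed`/`grep` would target the real FPM config for whatever PHP version the image ships; creating the missing directory by hand would only be a stopgap, since the image itself would normally be expected to provide it.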

### I've read and agree with the following

- [X] I've checked all open and closed issues and my issue is not there.
ray73864 commented 1 year ago

Application Name:tt-rss Pod Name:tt-rss-cnpg-main-rw-cdb5b888f-qmjf2 Container Name:pgbouncer

2023-05-01 00:17:06.761348+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"Starting CloudNativePG PgBouncer Instance Manager","version":"1.19.0","build":{"Version":"1.19.0","Commit":"d9bf88dd","Date":"2023-02-14"}}
2023-05-01 00:17:06.819445+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/pgbouncer.ini"}
2023-05-01 00:17:06.822372+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/pg_hba.conf"}
2023-05-01 00:17:06.825059+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/server-tls/ca.crt"}
2023-05-01 00:17:06.827623+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/client-ca/ca.crt"}
2023-05-01 00:17:06.830298+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/server-tls/tls.crt"}
2023-05-01 00:17:06.832973+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/server-tls/tls.key"}
2023-05-01 00:17:06.835518+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/authUser/tls.crt"}
2023-05-01 00:17:06.838181+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/authUser/tls.key"}
2023-05-01 00:17:06.846080+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"record","pipe":"stderr","record":{"timestamp":"2023-05-01 00:17:06.845 UTC","pid":"20","level":"LOG","msg":"kernel file descriptor limit: 1048576 (hard: 1048576); max_client_conn: 1000, max expected fd use: 1012"}}
2023-05-01 00:17:06.846276+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"record","pipe":"stderr","record":{"timestamp":"2023-05-01 00:17:06.846 UTC","pid":"20","level":"LOG","msg":"listening on 0.0.0.0:5432"}}
2023-05-01 00:17:06.846290+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"record","pipe":"stderr","record":{"timestamp":"2023-05-01 00:17:06.846 UTC","pid":"20","level":"LOG","msg":"listening on [::]:5432"}}
2023-05-01 00:17:06.846341+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"record","pipe":"stderr","record":{"timestamp":"2023-05-01 00:17:06.846 UTC","pid":"20","level":"LOG","msg":"listening on unix:/controller/run/.s.PGSQL.5432"}}
2023-05-01 00:17:06.846357+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"record","pipe":"stderr","record":{"timestamp":"2023-05-01 00:17:06.846 UTC","pid":"20","level":"LOG","msg":"process up: PgBouncer 1.18.0, libevent 2.1.8-stable (epoll), adns: udns 0.4, tls: OpenSSL 1.1.1n  15 Mar 2022"}}

Application Name:tt-rss Pod Name:tt-rss-cnpg-main-rw-cdb5b888f-qmjf2 Container Name:bootstrap-controller

2023-05-01 00:16:15.101275+00:00{"level":"info","ts":"2023-05-01T00:16:15Z","msg":"Installing the manager executable","destination":"/controller/manager","version":"1.19.0","build":{"Version":"1.19.0","Commit":"d9bf88dd","Date":"2023-02-14"}}
2023-05-01 00:16:17.135261+00:00{"level":"info","ts":"2023-05-01T00:16:17Z","msg":"Setting 0750 permissions"}
2023-05-01 00:16:17.135283+00:00{"level":"info","ts":"2023-05-01T00:16:17Z","msg":"Bootstrap completed"}

Application Name:tt-rss Pod Name:tt-rss-cnpg-main-rw-cdb5b888f-z5mhz Container Name:pgbouncer

2023-05-01 00:17:06.761523+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"Starting CloudNativePG PgBouncer Instance Manager","version":"1.19.0","build":{"Version":"1.19.0","Commit":"d9bf88dd","Date":"2023-02-14"}}
2023-05-01 00:17:06.819432+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/server-tls/tls.crt"}
2023-05-01 00:17:06.822363+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/server-tls/tls.key"}
2023-05-01 00:17:06.825052+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/authUser/tls.crt"}
2023-05-01 00:17:06.827616+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/authUser/tls.key"}
2023-05-01 00:17:06.830291+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/pgbouncer.ini"}
2023-05-01 00:17:06.832967+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/pg_hba.conf"}
2023-05-01 00:17:06.835517+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/server-tls/ca.crt"}
2023-05-01 00:17:06.838170+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"updated configuration file","name":"/controller/configs/client-ca/ca.crt"}
2023-05-01 00:17:06.846158+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"record","pipe":"stderr","record":{"timestamp":"2023-05-01 00:17:06.845 UTC","pid":"19","level":"LOG","msg":"kernel file descriptor limit: 1048576 (hard: 1048576); max_client_conn: 1000, max expected fd use: 1012"}}
2023-05-01 00:17:06.846282+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"record","pipe":"stderr","record":{"timestamp":"2023-05-01 00:17:06.846 UTC","pid":"19","level":"LOG","msg":"listening on 0.0.0.0:5432"}}
2023-05-01 00:17:06.846294+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"record","pipe":"stderr","record":{"timestamp":"2023-05-01 00:17:06.846 UTC","pid":"19","level":"LOG","msg":"listening on [::]:5432"}}
2023-05-01 00:17:06.846342+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"record","pipe":"stderr","record":{"timestamp":"2023-05-01 00:17:06.846 UTC","pid":"19","level":"LOG","msg":"listening on unix:/controller/run/.s.PGSQL.5432"}}
2023-05-01 00:17:06.846361+00:00{"level":"info","ts":"2023-05-01T00:17:06Z","msg":"record","pipe":"stderr","record":{"timestamp":"2023-05-01 00:17:06.846 UTC","pid":"19","level":"LOG","msg":"process up: PgBouncer 1.18.0, libevent 2.1.8-stable (epoll), adns: udns 0.4, tls: OpenSSL 1.1.1n  15 Mar 2022"}}

Application Name: tt-rss — Pod Name: tt-rss-cnpg-main-rw-cdb5b888f-z5mhz — Container Name: bootstrap-controller

2023-05-01 00:16:15.095543+00:00{"level":"info","ts":"2023-05-01T00:16:15Z","msg":"Installing the manager executable","destination":"/controller/manager","version":"1.19.0","build":{"Version":"1.19.0","Commit":"d9bf88dd","Date":"2023-02-14"}}
2023-05-01 00:16:17.133944+00:00{"level":"info","ts":"2023-05-01T00:16:17Z","msg":"Setting 0750 permissions"}
2023-05-01 00:16:17.133981+00:00{"level":"info","ts":"2023-05-01T00:16:17Z","msg":"Bootstrap completed"}

Application Name: tt-rss — Pod Name: tt-rss-cnpg-main-1 — Container Name: postgres

2023-05-01 00:17:50.228297+00:00{"level":"info","ts":"2023-05-01T00:17:50Z","logger":"setup","msg":"Starting CloudNativePG Instance Manager","logging_pod":"tt-rss-cnpg-main-1","version":"1.19.0","build":{"Version":"1.19.0","Commit":"d9bf88dd","Date":"2023-02-14"}}
2023-05-01 00:17:51.537379+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"setup","msg":"starting controller-runtime manager","logging_pod":"tt-rss-cnpg-main-1"}
2023-05-01 00:17:51.537688+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Starting EventSource","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","source":"kind source: *v1.Cluster"}
2023-05-01 00:17:51.537701+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Starting Controller","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
2023-05-01 00:17:51.537962+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Starting webserver","logging_pod":"tt-rss-cnpg-main-1","address":":9187"}
2023-05-01 00:17:51.537976+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Starting webserver","logging_pod":"tt-rss-cnpg-main-1","address":"localhost:8010"}
2023-05-01 00:17:51.537985+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Starting webserver","logging_pod":"tt-rss-cnpg-main-1","address":":8000"}
2023-05-01 00:17:51.604693+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Instance status probe failing","logging_pod":"tt-rss-cnpg-main-1","err":"failed to connect to `host=/controller/run user=postgres database=postgres`: dial error (dial unix /controller/run/.s.PGSQL.5432: connect: no such file or directory)"}
2023-05-01 00:17:51.638682+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Starting workers","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","worker count":1}
2023-05-01 00:17:51.639056+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Ignore minSyncReplicas to enforce self-healing","logging_pod":"tt-rss-cnpg-main-1","syncReplicas":-1,"minSyncReplicas":0,"maxSyncReplicas":0}
2023-05-01 00:17:51.693748+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Refreshed configuration file","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"89aa9c6e-5a42-4a71-9d2d-c4c31da0bacb","uuid":"9c56d8bb-e7b5-11ed-ad8b-b6ea5f78a982","logging_pod":"tt-rss-cnpg-main-1","filename":"/controller/certificates/server.crt","secret":"tt-rss-cnpg-main-server"}
2023-05-01 00:17:51.696512+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Refreshed configuration file","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"89aa9c6e-5a42-4a71-9d2d-c4c31da0bacb","uuid":"9c56d8bb-e7b5-11ed-ad8b-b6ea5f78a982","logging_pod":"tt-rss-cnpg-main-1","filename":"/controller/certificates/server.key","secret":"tt-rss-cnpg-main-server"}
2023-05-01 00:17:51.700530+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Refreshed configuration file","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"89aa9c6e-5a42-4a71-9d2d-c4c31da0bacb","uuid":"9c56d8bb-e7b5-11ed-ad8b-b6ea5f78a982","logging_pod":"tt-rss-cnpg-main-1","filename":"/controller/certificates/streaming_replica.crt","secret":"tt-rss-cnpg-main-replication"}
2023-05-01 00:17:51.703328+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Refreshed configuration file","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"89aa9c6e-5a42-4a71-9d2d-c4c31da0bacb","uuid":"9c56d8bb-e7b5-11ed-ad8b-b6ea5f78a982","logging_pod":"tt-rss-cnpg-main-1","filename":"/controller/certificates/streaming_replica.key","secret":"tt-rss-cnpg-main-replication"}
2023-05-01 00:17:51.707113+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Refreshed configuration file","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"89aa9c6e-5a42-4a71-9d2d-c4c31da0bacb","uuid":"9c56d8bb-e7b5-11ed-ad8b-b6ea5f78a982","logging_pod":"tt-rss-cnpg-main-1","filename":"/controller/certificates/client-ca.crt","secret":"tt-rss-cnpg-main-ca"}
2023-05-01 00:17:51.710948+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Refreshed configuration file","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"89aa9c6e-5a42-4a71-9d2d-c4c31da0bacb","uuid":"9c56d8bb-e7b5-11ed-ad8b-b6ea5f78a982","logging_pod":"tt-rss-cnpg-main-1","filename":"/controller/certificates/server-ca.crt","secret":"tt-rss-cnpg-main-ca"}
2023-05-01 00:17:51.714091+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Installed configuration file","logging_pod":"tt-rss-cnpg-main-1","pgdata":"/var/lib/postgresql/data/pgdata","filename":"pg_hba.conf"}
2023-05-01 00:17:51.714119+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Ignore minSyncReplicas to enforce self-healing","logging_pod":"tt-rss-cnpg-main-1","syncReplicas":-1,"minSyncReplicas":0,"maxSyncReplicas":0}
2023-05-01 00:17:51.716999+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Installed configuration file","logging_pod":"tt-rss-cnpg-main-1","pgdata":"/var/lib/postgresql/data/pgdata","filename":"custom.conf"}
2023-05-01 00:17:51.717086+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Cluster status","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"89aa9c6e-5a42-4a71-9d2d-c4c31da0bacb","uuid":"9c56d8bb-e7b5-11ed-ad8b-b6ea5f78a982","logging_pod":"tt-rss-cnpg-main-1","currentPrimary":"","targetPrimary":"tt-rss-cnpg-main-1"}
2023-05-01 00:17:51.717101+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"First primary instance bootstrap, marking myself as primary","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"89aa9c6e-5a42-4a71-9d2d-c4c31da0bacb","uuid":"9c56d8bb-e7b5-11ed-ad8b-b6ea5f78a982","logging_pod":"tt-rss-cnpg-main-1","currentPrimary":"","targetPrimary":"tt-rss-cnpg-main-1"}
2023-05-01 00:17:51.723519+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Extracting pg_controldata information","logging_pod":"tt-rss-cnpg-main-1","reason":"postmaster start up"}
2023-05-01 00:17:51.724662+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"pg_controldata","msg":"pg_control version number:            1300\nCatalog version number:               202209061\nDatabase system identifier:           7228001436425232409\nDatabase cluster state:               shut down\npg_control last modified:             Mon 01 May 2023 12:17:09 AM UTC\nLatest checkpoint location:           0/191C8C8\nLatest checkpoint's REDO location:    0/191C8C8\nLatest checkpoint's REDO WAL file:    000000010000000000000001\nLatest checkpoint's TimeLineID:       1\nLatest checkpoint's PrevTimeLineID:   1\nLatest checkpoint's full_page_writes: on\nLatest checkpoint's NextXID:          0:726\nLatest checkpoint's NextOID:          16386\nLatest checkpoint's NextMultiXactId:  1\nLatest checkpoint's NextMultiOffset:  0\nLatest checkpoint's oldestXID:        716\nLatest checkpoint's oldestXID's DB:   1\nLatest checkpoint's oldestActiveXID:  0\nLatest checkpoint's oldestMultiXid:   1\nLatest checkpoint's oldestMulti's DB: 1\nLatest checkpoint's oldestCommitTsXid:0\nLatest checkpoint's newestCommitTsXid:0\nTime of latest checkpoint:            Mon 01 May 2023 12:17:08 AM UTC\nFake LSN counter for unlogged rels:   0/3E8\nMinimum recovery ending location:     0/0\nMin recovery ending loc's timeline:   0\nBackup start location:                0/0\nBackup end location:                  0/0\nEnd-of-backup record required:        no\nwal_level setting:                    replica\nwal_log_hints setting:                off\nmax_connections setting:              100\nmax_worker_processes setting:         8\nmax_wal_senders setting:              10\nmax_prepared_xacts setting:           0\nmax_locks_per_xact setting:           64\ntrack_commit_timestamp setting:       off\nMaximum data alignment:               8\nDatabase block size:                  8192\nBlocks per segment of large relation: 131072\nWAL block size:                       8192\nBytes per WAL segment:                16777216\nMaximum length of identifiers:        64\nMaximum columns in an index:          32\nMaximum size of a TOAST chunk:        1996\nSize of a large-object chunk:         2048\nDate/time type storage:               64-bit integers\nFloat8 argument passing:              by value\nData page checksum version:           0\nMock authentication nonce:            a2242f73221a20076d18df5af9595eb833ace467bd909e72b8c21dd4db20060e\n","pipe":"stdout","logging_pod":"tt-rss-cnpg-main-1"}
2023-05-01 00:17:51.724689+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"The PID file content is wrong, deleting it and assuming it's stale","file":"/var/lib/postgresql/data/pgdata/postmaster.pid","logging_pod":"tt-rss-cnpg-main-1","err":"file does not exist","pidFileContents":""}
2023-05-01 00:17:51.725189+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"DB not available, will retry","logging_pod":"tt-rss-cnpg-main-1","err":"failed to connect to `host=/controller/run user=postgres database=postgres`: dial error (dial unix /controller/run/.s.PGSQL.5432: connect: no such file or directory)"}
2023-05-01 00:17:51.744034+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Instance is still down, will retry in 1 second","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"89aa9c6e-5a42-4a71-9d2d-c4c31da0bacb","uuid":"9c56d8bb-e7b5-11ed-ad8b-b6ea5f78a982","logging_pod":"tt-rss-cnpg-main-1"}
2023-05-01 00:17:51.744114+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Ignore minSyncReplicas to enforce self-healing","logging_pod":"tt-rss-cnpg-main-1","syncReplicas":-1,"minSyncReplicas":0,"maxSyncReplicas":0}
2023-05-01 00:17:51.744858+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"postgres","msg":"2023-05-01 00:17:51.744 UTC [28] LOG:  redirecting log output to logging collector process","pipe":"stderr","logging_pod":"tt-rss-cnpg-main-1"}
2023-05-01 00:17:51.744881+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"postgres","msg":"2023-05-01 00:17:51.744 UTC [28] HINT:  Future log output will appear in directory \"/controller/log\".","pipe":"stderr","logging_pod":"tt-rss-cnpg-main-1"}
2023-05-01 00:17:51.745109+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:17:51.744 UTC","process_id":"28","session_id":"644f052f.1c","session_line_num":"1","session_start_time":"2023-05-01 00:17:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"ending log output to stderr","hint":"Future log output will go to log destination \"csvlog\".","backend_type":"postmaster","query_id":"0"}}
2023-05-01 00:17:51.745123+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:17:51.744 UTC","process_id":"28","session_id":"644f052f.1c","session_line_num":"2","session_start_time":"2023-05-01 00:17:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"starting PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit","backend_type":"postmaster","query_id":"0"}}
2023-05-01 00:17:51.745133+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:17:51.744 UTC","process_id":"28","session_id":"644f052f.1c","session_line_num":"3","session_start_time":"2023-05-01 00:17:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"listening on IPv4 address \"0.0.0.0\", port 5432","backend_type":"postmaster","query_id":"0"}}
2023-05-01 00:17:51.745140+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:17:51.744 UTC","process_id":"28","session_id":"644f052f.1c","session_line_num":"4","session_start_time":"2023-05-01 00:17:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"listening on IPv6 address \"::\", port 5432","backend_type":"postmaster","query_id":"0"}}
2023-05-01 00:17:51.745146+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"postgres","msg":"2023-05-01 00:17:51.744 UTC [28] LOG:  ending log output to stderr","source":"/controller/log/postgres","logging_pod":"tt-rss-cnpg-main-1"}
2023-05-01 00:17:51.745152+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"postgres","msg":"2023-05-01 00:17:51.744 UTC [28] HINT:  Future log output will go to log destination \"csvlog\".","source":"/controller/log/postgres","logging_pod":"tt-rss-cnpg-main-1"}
2023-05-01 00:17:51.748917+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"Ignore minSyncReplicas to enforce self-healing","logging_pod":"tt-rss-cnpg-main-1","syncReplicas":-1,"minSyncReplicas":0,"maxSyncReplicas":0}
2023-05-01 00:17:51.750799+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:17:51.750 UTC","process_id":"28","session_id":"644f052f.1c","session_line_num":"5","session_start_time":"2023-05-01 00:17:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"listening on Unix socket \"/controller/run/.s.PGSQL.5432\"","backend_type":"postmaster","query_id":"0"}}
2023-05-01 00:17:51.757610+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:17:51.757 UTC","process_id":"33","session_id":"644f052f.21","session_line_num":"1","session_start_time":"2023-05-01 00:17:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"database system was shut down at 2023-05-01 00:17:09 UTC","backend_type":"startup","query_id":"0"}}
2023-05-01 00:17:51.764738+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:17:51.764 UTC","user_name":"postgres","database_name":"postgres","process_id":"34","connection_from":"[local]","session_id":"644f052f.22","session_line_num":"1","session_start_time":"2023-05-01 00:17:51 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
2023-05-01 00:17:51.765497+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","msg":"DB not available, will retry","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"4baf6899-bf4f-4eb6-86bb-b15f7a813896","uuid":"9c66e9fa-e7b5-11ed-ad8b-b6ea5f78a982","logging_pod":"tt-rss-cnpg-main-1","err":"failed to connect to `host=/controller/run user=postgres database=postgres`: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
2023-05-01 00:17:51.766084+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:17:51.765 UTC","user_name":"postgres","database_name":"postgres","process_id":"35","connection_from":"[local]","session_id":"644f052f.23","session_line_num":"1","session_start_time":"2023-05-01 00:17:51 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
2023-05-01 00:17:51.768405+00:00{"level":"info","ts":"2023-05-01T00:17:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:17:51.768 UTC","process_id":"28","session_id":"644f052f.1c","session_line_num":"6","session_start_time":"2023-05-01 00:17:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"database system is ready to accept connections","backend_type":"postmaster","query_id":"0"}}
2023-05-01 00:17:52.271432+00:00{"level":"info","ts":"2023-05-01T00:17:52Z","msg":"Readiness probe failing","logging_pod":"tt-rss-cnpg-main-1","err":"instance is not ready yet"}
2023-05-01 00:17:52.745081+00:00{"level":"info","ts":"2023-05-01T00:17:52Z","msg":"Ignore minSyncReplicas to enforce self-healing","logging_pod":"tt-rss-cnpg-main-1","syncReplicas":-1,"minSyncReplicas":0,"maxSyncReplicas":0}
2023-05-01 00:17:52.753554+00:00{"level":"info","ts":"2023-05-01T00:17:52Z","msg":"Ignore minSyncReplicas to enforce self-healing","logging_pod":"tt-rss-cnpg-main-1","syncReplicas":-1,"minSyncReplicas":0,"maxSyncReplicas":0}
2023-05-01 00:17:53.076979+00:00{"level":"info","ts":"2023-05-01T00:17:53Z","msg":"Ignore minSyncReplicas to enforce self-healing","logging_pod":"tt-rss-cnpg-main-1","syncReplicas":-1,"minSyncReplicas":0,"maxSyncReplicas":0}
2023-05-01 00:17:53.081576+00:00{"level":"info","ts":"2023-05-01T00:17:53Z","msg":"Ignore minSyncReplicas to enforce self-healing","logging_pod":"tt-rss-cnpg-main-1","syncReplicas":-1,"minSyncReplicas":0,"maxSyncReplicas":0}
2023-05-01 00:18:07.901081+00:00{"level":"info","ts":"2023-05-01T00:18:07Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:18:07.900 UTC","user_name":"tt-rss","database_name":"tt-rss","process_id":"182","connection_from":"172.16.3.59:33570","session_id":"644f053f.b6","session_line_num":"1","command_tag":"SELECT","session_start_time":"2023-05-01 00:18:07 UTC","virtual_transaction_id":"3/248","transaction_id":"0","error_severity":"ERROR","sql_state_code":"42P01","message":"relation \"ttrss_version\" does not exist","query":"select * from ttrss_version","query_pos":"15","application_name":"psql","backend_type":"client backend","query_id":"0"}}
2023-05-01 00:18:07.981958+00:00{"level":"info","ts":"2023-05-01T00:18:07Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:18:07.981 UTC","user_name":"tt-rss","database_name":"tt-rss","process_id":"184","connection_from":"172.16.3.59:33586","session_id":"644f053f.b8","session_line_num":"1","command_tag":"PARSE","session_start_time":"2023-05-01 00:18:07 UTC","virtual_transaction_id":"3/263","transaction_id":"0","error_severity":"ERROR","sql_state_code":"42P01","message":"relation \"ttrss_version\" does not exist","query":"SELECT * FROM ttrss_version","query_pos":"15","backend_type":"client backend","query_id":"0"}}
2023-05-01 00:18:29.480838+00:00{"level":"info","ts":"2023-05-01T00:18:29Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:18:29.480 UTC","process_id":"31","session_id":"644f052f.1f","session_line_num":"1","session_start_time":"2023-05-01 00:17:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"checkpoint starting: force wait","backend_type":"checkpointer","query_id":"0"}}
2023-05-01 00:18:29.525425+00:00{"level":"info","ts":"2023-05-01T00:18:29Z","logger":"wal-archive","msg":"Backup not configured, skip WAL archiving","logging_pod":"tt-rss-cnpg-main-1","walName":"pg_wal/000000010000000000000001","currentPrimary":"tt-rss-cnpg-main-1","targetPrimary":"tt-rss-cnpg-main-1"}
2023-05-01 00:19:02.524157+00:00{"level":"info","ts":"2023-05-01T00:19:02Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:19:02.524 UTC","process_id":"31","session_id":"644f052f.1f","session_line_num":"2","session_start_time":"2023-05-01 00:17:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"checkpoint complete: wrote 332 buffers (2.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=32.957 s, sync=0.067 s, total=33.044 s; sync files=152, longest=0.006 s, average=0.001 s; distance=7053 kB, estimate=7053 kB","backend_type":"checkpointer","query_id":"0"}}
2023-05-01 00:19:03.224471+00:00{"level":"info","ts":"2023-05-01T00:19:03Z","logger":"wal-archive","msg":"Backup not configured, skip WAL archiving","logging_pod":"tt-rss-cnpg-main-1","walName":"pg_wal/000000010000000000000002","currentPrimary":"tt-rss-cnpg-main-1","targetPrimary":"tt-rss-cnpg-main-1"}
2023-05-01 00:19:03.656956+00:00{"level":"info","ts":"2023-05-01T00:19:03Z","logger":"wal-archive","msg":"Backup not configured, skip WAL archiving","logging_pod":"tt-rss-cnpg-main-1","walName":"pg_wal/000000010000000000000002.00000028.backup","currentPrimary":"tt-rss-cnpg-main-1","targetPrimary":"tt-rss-cnpg-main-1"}
2023-05-01 00:23:29.660369+00:00{"level":"info","ts":"2023-05-01T00:23:29Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:23:29.657 UTC","process_id":"31","session_id":"644f052f.1f","session_line_num":"3","session_start_time":"2023-05-01 00:17:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"checkpoint starting: time","backend_type":"checkpointer","query_id":"0"}}
2023-05-01 00:23:34.218213+00:00{"level":"info","ts":"2023-05-01T00:23:34Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-1","record":{"log_time":"2023-05-01 00:23:34.217 UTC","process_id":"31","session_id":"644f052f.1f","session_line_num":"4","session_start_time":"2023-05-01 00:17:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"checkpoint complete: wrote 46 buffers (0.3%); 0 WAL file(s) added, 0 removed, 0 recycled; write=4.508 s, sync=0.036 s, total=4.561 s; sync files=14, longest=0.010 s, average=0.003 s; distance=16411 kB, estimate=16411 kB","backend_type":"checkpointer","query_id":"0"}}
2023-05-01 00:24:04.005718+00:00{"level":"info","ts":"2023-05-01T00:24:04Z","logger":"wal-archive","msg":"Backup not configured, skip WAL archiving","logging_pod":"tt-rss-cnpg-main-1","walName":"pg_wal/000000010000000000000003","currentPrimary":"tt-rss-cnpg-main-1","targetPrimary":"tt-rss-cnpg-main-1"}

Application Name: tt-rss — Pod Name: tt-rss-cnpg-main-1 — Container Name: bootstrap-controller

2023-05-01 00:17:34.158060+00:00{"level":"info","ts":"2023-05-01T00:17:34Z","msg":"Installing the manager executable","destination":"/controller/manager","version":"1.19.0","build":{"Version":"1.19.0","Commit":"d9bf88dd","Date":"2023-02-14"}}
2023-05-01 00:17:35.150391+00:00{"level":"info","ts":"2023-05-01T00:17:35Z","msg":"Setting 0750 permissions"}
2023-05-01 00:17:35.150544+00:00{"level":"info","ts":"2023-05-01T00:17:35Z","msg":"Bootstrap completed"}

Application Name: tt-rss — Pod Name: tt-rss-77444564bc-j5vsp — Container Name: tt-rss-system-cnpg-wait

2023-05-01 00:16:30.676288+00:00Executing DB waits...
2023-05-01 00:16:30.676311+00:00Detected RW pooler, testing RW pooler availability...
2023-05-01 00:16:30.676316+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:16:30.702139+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:16:35.704367+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:16:35.720461+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:16:40.720972+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:16:40.738165+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:16:45.738978+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:16:45.780375+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:16:50.781077+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:16:50.797145+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:16:55.797653+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:16:55.813368+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:17:00.813963+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:17:00.830236+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:17:05.831087+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:17:05.846591+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:17:10.847095+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:17:10.863888+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:17:15.864496+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:17:15.880790+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:17:20.881371+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:17:20.897314+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:17:25.898020+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:17:25.915000+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:17:30.915655+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:17:30.932700+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:17:35.933623+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:17:35.949335+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:17:40.949965+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:17:40.965896+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:17:45.966487+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:17:45.983810+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:17:50.984587+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:17:51.001755+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:17:56.002386+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:17:56.019274+00:00tt-rss-cnpg-main-rw:5432 - no response
2023-05-01 00:18:01.019792+00:00Testing database on url:  tt-rss-cnpg-main-rw
2023-05-01 00:18:01.048422+00:00tt-rss-cnpg-main-rw:5432 - accepting connections
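For context: the wait container above polls the `tt-rss-cnpg-main-rw` service roughly every 5 seconds until it responds (the `host:5432 - no response` / `accepting connections` output matches PostgreSQL's `pg_isready` format). The behavior can be approximated with a small TCP poll for local troubleshooting — this is a sketch, not the chart's actual script, and the `wait_for_port` helper name and its defaults are hypothetical:

```python
import socket
import time

def wait_for_port(host: str, port: int, retries: int = 60, delay: float = 5.0) -> bool:
    """Poll a TCP endpoint until it accepts connections, similar to the
    chart's DB-wait init container. Returns True once a connection
    succeeds, False after exhausting all retries."""
    for attempt in range(1, retries + 1):
        try:
            # A completed TCP handshake is enough to mimic the "accepting
            # connections" check; a real pg_isready also speaks the
            # PostgreSQL startup protocol.
            with socket.create_connection((host, port), timeout=2):
                print(f"{host}:{port} - accepting connections")
                return True
        except OSError:
            print(f"{host}:{port} - no response (attempt {attempt}/{retries})")
            time.sleep(delay)
    return False
```

Note this only confirms the socket is open, not that the database has finished recovery — as the postgres logs above show, the server can listen while still rejecting clients with `the database system is starting up` (SQLSTATE 57P03).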

Application Name: tt-rss — Pod Name: tt-rss-cnpg-main-2 — Container Name: postgres

2023-05-01 00:19:50.244692+00:00{"level":"info","ts":"2023-05-01T00:19:50Z","logger":"setup","msg":"Starting CloudNativePG Instance Manager","logging_pod":"tt-rss-cnpg-main-2","version":"1.19.0","build":{"Version":"1.19.0","Commit":"d9bf88dd","Date":"2023-02-14"}}
2023-05-01 00:19:51.554723+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"setup","msg":"starting controller-runtime manager","logging_pod":"tt-rss-cnpg-main-2"}
2023-05-01 00:19:51.555242+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Starting EventSource","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","source":"kind source: *v1.Cluster"}
2023-05-01 00:19:51.555271+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Starting Controller","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
2023-05-01 00:19:51.555290+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Starting webserver","logging_pod":"tt-rss-cnpg-main-2","address":":9187"}
2023-05-01 00:19:51.555300+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Starting webserver","logging_pod":"tt-rss-cnpg-main-2","address":":8000"}
2023-05-01 00:19:51.555311+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Starting webserver","logging_pod":"tt-rss-cnpg-main-2","address":"localhost:8010"}
2023-05-01 00:19:51.655707+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Starting workers","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","worker count":1}
2023-05-01 00:19:51.656038+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Readiness probe failing","logging_pod":"tt-rss-cnpg-main-2","err":"instance is not ready yet"}
2023-05-01 00:19:51.709152+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Refreshed configuration file","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"ae292db0-2e0d-4901-b118-edd93bb27b1d","uuid":"e3e00060-e7b5-11ed-817e-3eb70d9f8da1","logging_pod":"tt-rss-cnpg-main-2","filename":"/controller/certificates/server.crt","secret":"tt-rss-cnpg-main-server"}
2023-05-01 00:19:51.711982+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Refreshed configuration file","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"ae292db0-2e0d-4901-b118-edd93bb27b1d","uuid":"e3e00060-e7b5-11ed-817e-3eb70d9f8da1","logging_pod":"tt-rss-cnpg-main-2","filename":"/controller/certificates/server.key","secret":"tt-rss-cnpg-main-server"}
2023-05-01 00:19:51.716168+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Refreshed configuration file","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"ae292db0-2e0d-4901-b118-edd93bb27b1d","uuid":"e3e00060-e7b5-11ed-817e-3eb70d9f8da1","logging_pod":"tt-rss-cnpg-main-2","filename":"/controller/certificates/streaming_replica.crt","secret":"tt-rss-cnpg-main-replication"}
2023-05-01 00:19:51.719009+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Refreshed configuration file","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"ae292db0-2e0d-4901-b118-edd93bb27b1d","uuid":"e3e00060-e7b5-11ed-817e-3eb70d9f8da1","logging_pod":"tt-rss-cnpg-main-2","filename":"/controller/certificates/streaming_replica.key","secret":"tt-rss-cnpg-main-replication"}
2023-05-01 00:19:51.723278+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Refreshed configuration file","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"ae292db0-2e0d-4901-b118-edd93bb27b1d","uuid":"e3e00060-e7b5-11ed-817e-3eb70d9f8da1","logging_pod":"tt-rss-cnpg-main-2","filename":"/controller/certificates/client-ca.crt","secret":"tt-rss-cnpg-main-ca"}
2023-05-01 00:19:51.727672+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Refreshed configuration file","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"ae292db0-2e0d-4901-b118-edd93bb27b1d","uuid":"e3e00060-e7b5-11ed-817e-3eb70d9f8da1","logging_pod":"tt-rss-cnpg-main-2","filename":"/controller/certificates/server-ca.crt","secret":"tt-rss-cnpg-main-ca"}
2023-05-01 00:19:51.730790+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Updated replication settings in postgresql.auto.conf file","logging_pod":"tt-rss-cnpg-main-2"}
2023-05-01 00:19:51.733901+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Extracting pg_controldata information","logging_pod":"tt-rss-cnpg-main-2","reason":"postmaster start up"}
2023-05-01 00:19:51.735243+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"pg_controldata","msg":"pg_control version number:            1300\nCatalog version number:               202209061\nDatabase system identifier:           7228001436425232409\nDatabase cluster state:               in production\npg_control last modified:             Mon 01 May 2023 12:19:02 AM UTC\nLatest checkpoint location:           0/2053B90\nLatest checkpoint's REDO location:    0/2000028\nLatest checkpoint's REDO WAL file:    000000010000000000000002\nLatest checkpoint's TimeLineID:       1\nLatest checkpoint's PrevTimeLineID:   1\nLatest checkpoint's full_page_writes: on\nLatest checkpoint's NextXID:          0:743\nLatest checkpoint's NextOID:          24578\nLatest checkpoint's NextMultiXactId:  1\nLatest checkpoint's NextMultiOffset:  0\nLatest checkpoint's oldestXID:        716\nLatest checkpoint's oldestXID's DB:   1\nLatest checkpoint's oldestActiveXID:  743\nLatest checkpoint's oldestMultiXid:   1\nLatest checkpoint's oldestMulti's DB: 1\nLatest checkpoint's oldestCommitTsXid:0\nLatest checkpoint's newestCommitTsXid:0\nTime of latest checkpoint:            Mon 01 May 2023 12:18:29 AM UTC\nFake LSN counter for unlogged rels:   0/3E8\nMinimum recovery ending location:     0/0\nMin recovery ending loc's timeline:   0\nBackup start location:                0/0\nBackup end location:                  0/0\nEnd-of-backup record required:        no\nwal_level setting:                    logical\nwal_log_hints setting:                on\nmax_connections setting:              100\nmax_worker_processes setting:         32\nmax_wal_senders setting:              10\nmax_prepared_xacts setting:           0\nmax_locks_per_xact setting:           64\ntrack_commit_timestamp setting:       off\nMaximum data alignment:               8\nDatabase block size:                  8192\nBlocks per segment of large relation: 131072\nWAL block size:                       8192\nBytes per WAL segment:                16777216\nMaximum length of identifiers:        64\nMaximum columns in an index:          32\nMaximum size of a TOAST chunk:        1996\nSize of a large-object chunk:         2048\nDate/time type storage:               64-bit integers\nFloat8 argument passing:              by value\nData page checksum version:           0\nMock authentication nonce:            a2242f73221a20076d18df5af9595eb833ace467bd909e72b8c21dd4db20060e\n","pipe":"stdout","logging_pod":"tt-rss-cnpg-main-2"}
2023-05-01 00:19:51.735315+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"The PID file content is wrong, deleting it and assuming it's stale","file":"/var/lib/postgresql/data/pgdata/postmaster.pid","logging_pod":"tt-rss-cnpg-main-2","err":"file does not exist","pidFileContents":""}
2023-05-01 00:19:51.750970+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"postgres","msg":"2023-05-01 00:19:51.750 UTC [24] LOG:  redirecting log output to logging collector process","pipe":"stderr","logging_pod":"tt-rss-cnpg-main-2"}
2023-05-01 00:19:51.750990+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"postgres","msg":"2023-05-01 00:19:51.750 UTC [24] HINT:  Future log output will appear in directory \"/controller/log\".","pipe":"stderr","logging_pod":"tt-rss-cnpg-main-2"}
2023-05-01 00:19:51.751272+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:19:51.750 UTC","process_id":"24","session_id":"644f05a7.18","session_line_num":"1","session_start_time":"2023-05-01 00:19:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"ending log output to stderr","hint":"Future log output will go to log destination \"csvlog\".","backend_type":"postmaster","query_id":"0"}}
2023-05-01 00:19:51.751288+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:19:51.750 UTC","process_id":"24","session_id":"644f05a7.18","session_line_num":"2","session_start_time":"2023-05-01 00:19:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"starting PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit","backend_type":"postmaster","query_id":"0"}}
2023-05-01 00:19:51.751297+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:19:51.751 UTC","process_id":"24","session_id":"644f05a7.18","session_line_num":"3","session_start_time":"2023-05-01 00:19:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"listening on IPv4 address \"0.0.0.0\", port 5432","backend_type":"postmaster","query_id":"0"}}
2023-05-01 00:19:51.751308+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:19:51.751 UTC","process_id":"24","session_id":"644f05a7.18","session_line_num":"4","session_start_time":"2023-05-01 00:19:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"listening on IPv6 address \"::\", port 5432","backend_type":"postmaster","query_id":"0"}}
2023-05-01 00:19:51.751317+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"postgres","msg":"2023-05-01 00:19:51.750 UTC [24] LOG:  ending log output to stderr","source":"/controller/log/postgres","logging_pod":"tt-rss-cnpg-main-2"}
2023-05-01 00:19:51.751324+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"postgres","msg":"2023-05-01 00:19:51.750 UTC [24] HINT:  Future log output will go to log destination \"csvlog\".","source":"/controller/log/postgres","logging_pod":"tt-rss-cnpg-main-2"}
2023-05-01 00:19:51.751907+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","msg":"Instance is still down, will retry in 1 second","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"tt-rss-cnpg-main","namespace":"ix-tt-rss"},"namespace":"ix-tt-rss","name":"tt-rss-cnpg-main","reconcileID":"ae292db0-2e0d-4901-b118-edd93bb27b1d","uuid":"e3e00060-e7b5-11ed-817e-3eb70d9f8da1","logging_pod":"tt-rss-cnpg-main-2"}
2023-05-01 00:19:51.756437+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:19:51.756 UTC","process_id":"24","session_id":"644f05a7.18","session_line_num":"5","session_start_time":"2023-05-01 00:19:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"listening on Unix socket \"/controller/run/.s.PGSQL.5432\"","backend_type":"postmaster","query_id":"0"}}
2023-05-01 00:19:51.765897+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:19:51.765 UTC","process_id":"28","session_id":"644f05a7.1c","session_line_num":"1","session_start_time":"2023-05-01 00:19:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"database system was interrupted; last known up at 2023-05-01 00:19:02 UTC","backend_type":"startup","query_id":"0"}}
2023-05-01 00:19:51.795379+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"wal-restore","msg":"tried restoring WALs, but no backup was configured","logging_pod":"tt-rss-cnpg-main-2"}
2023-05-01 00:19:51.896348+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:19:51.896 UTC","process_id":"28","session_id":"644f05a7.1c","session_line_num":"2","session_start_time":"2023-05-01 00:19:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"entering standby mode","backend_type":"startup","query_id":"0"}}
2023-05-01 00:19:51.910023+00:00{"level":"info","ts":"2023-05-01T00:19:51Z","logger":"wal-restore","msg":"tried restoring WALs, but no backup was configured","logging_pod":"tt-rss-cnpg-main-2"}
2023-05-01 00:19:52.018429+00:00{"level":"info","ts":"2023-05-01T00:19:52Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:19:52.018 UTC","process_id":"28","session_id":"644f05a7.1c","session_line_num":"3","session_start_time":"2023-05-01 00:19:51 UTC","virtual_transaction_id":"1/0","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"redo starts at 0/2000028","backend_type":"startup","query_id":"0"}}
2023-05-01 00:19:52.033764+00:00{"level":"info","ts":"2023-05-01T00:19:52Z","logger":"wal-restore","msg":"tried restoring WALs, but no backup was configured","logging_pod":"tt-rss-cnpg-main-2"}
2023-05-01 00:19:52.137961+00:00{"level":"info","ts":"2023-05-01T00:19:52Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:19:52.137 UTC","process_id":"28","session_id":"644f05a7.1c","session_line_num":"4","session_start_time":"2023-05-01 00:19:51 UTC","virtual_transaction_id":"1/0","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"consistent recovery state reached at 0/2053C30","backend_type":"startup","query_id":"0"}}
2023-05-01 00:19:52.137990+00:00{"level":"info","ts":"2023-05-01T00:19:52Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:19:52.137 UTC","process_id":"24","session_id":"644f05a7.18","session_line_num":"6","session_start_time":"2023-05-01 00:19:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"database system is ready to accept read-only connections","backend_type":"postmaster","query_id":"0"}}
2023-05-01 00:19:52.151780+00:00{"level":"info","ts":"2023-05-01T00:19:52Z","logger":"wal-restore","msg":"tried restoring WALs, but no backup was configured","logging_pod":"tt-rss-cnpg-main-2"}
2023-05-01 00:19:52.257230+00:00{"level":"info","ts":"2023-05-01T00:19:52Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:19:52.257 UTC","process_id":"94","session_id":"644f05a8.5e","session_line_num":"1","session_start_time":"2023-05-01 00:19:52 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"started streaming WAL from primary at 0/3000000 on timeline 1","backend_type":"walreceiver","query_id":"0"}}
2023-05-01 00:24:51.863730+00:00{"level":"info","ts":"2023-05-01T00:24:51Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:24:51.861 UTC","process_id":"26","session_id":"644f05a7.1a","session_line_num":"1","session_start_time":"2023-05-01 00:19:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"restartpoint starting: time","backend_type":"checkpointer","query_id":"0"}}
2023-05-01 00:24:56.429649+00:00{"level":"info","ts":"2023-05-01T00:24:56Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:24:56.429 UTC","process_id":"26","session_id":"644f05a7.1a","session_line_num":"2","session_start_time":"2023-05-01 00:19:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"restartpoint complete: wrote 47 buffers (0.3%); 1 WAL file(s) added, 0 removed, 0 recycled; write=4.513 s, sync=0.029 s, total=4.568 s; sync files=14, longest=0.006 s, average=0.003 s; distance=16411 kB, estimate=16411 kB","backend_type":"checkpointer","query_id":"0"}}
2023-05-01 00:24:56.429686+00:00{"level":"info","ts":"2023-05-01T00:24:56Z","logger":"postgres","msg":"record","logging_pod":"tt-rss-cnpg-main-2","record":{"log_time":"2023-05-01 00:24:56.429 UTC","process_id":"26","session_id":"644f05a7.1a","session_line_num":"3","session_start_time":"2023-05-01 00:19:51 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"recovery restart point at 0/3006F88","detail":"Last completed transaction was at log time 2023-05-01 00:18:41.88056+00.","backend_type":"checkpointer","query_id":"0"}}

Application Name:tt-rss Pod Name:tt-rss-cnpg-main-2 Container Name:bootstrap-controller

2023-05-01 00:19:34.879384+00:00{"level":"info","ts":"2023-05-01T00:19:34Z","msg":"Installing the manager executable","destination":"/controller/manager","version":"1.19.0","build":{"Version":"1.19.0","Commit":"d9bf88dd","Date":"2023-02-14"}}
2023-05-01 00:19:35.737699+00:00{"level":"info","ts":"2023-05-01T00:19:35Z","msg":"Setting 0750 permissions"}
2023-05-01 00:19:35.737724+00:00{"level":"info","ts":"2023-05-01T00:19:35Z","msg":"Bootstrap completed"}
ctag commented 1 year ago

I believe I'm seeing the same issue.

If I open a shell in the main tt-rss app and run `mkdir /var/run/php && service php7.4-fpm restart`, php-fpm appears to work.

But then I run into a separate error about the database (screenshot attached).

THE-ORONCO commented 1 year ago

I'm probably experiencing the same issue.

Something fishy seems to be going on with the databases in general: when I tried to stop tt-rss, both database pods got stuck in the Terminating state.

Nonetheless, I found a temporary fix for the issue that appears after creating the php7.4-fpm.sock file and restarting the php7.4-fpm service as described by @ctag.

I "fixed" it by creating both the /run/php/php7.4-fpm.sock file as well as the /var/run/postgresql/.s.PGSQL.5432 file and setting the ownership on both to www-data:www-data. Then I restarted the php7.4-fpm service by executing the command for starting it in the /entrypoint.sh file.

Here are the commands:

# create the php7.4-fpm socket
mkdir /run/php;
touch /run/php/php7.4-fpm.sock; 
chown www-data:www-data /run/php -R;

# create the DB connection socket
mkdir /var/run/postgresql;
touch /var/run/postgresql/.s.PGSQL.5432; 
chown www-data:www-data /var/run/postgresql -R;

# start the php7.4-fpm service via the same command as in the entrypoint.sh script
/usr/sbin/php-fpm7.4;

Clearly something is going wrong when the containers are created, but at least there is a temporary fix until the actual issue is found and fixed.

Edit: Turns out you have to apply this fix manually after each update, as it affects files that get reset on container recreation.
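Since the fix has to be reapplied after every update, the commands above can be collected into one idempotent script. This is only a sketch based on the workaround in this thread; the `BASE` argument is hypothetical and exists solely so the function can be exercised outside the container, where the real paths may not be writable.

```shell
#!/bin/sh
# Sketch of an idempotent version of the workaround above.
fix_sockets() {
    base="${1:-}"   # hypothetical prefix for testing; empty inside the container

    # recreate the php7.4-fpm socket path
    mkdir -p "$base/run/php"
    touch "$base/run/php/php7.4-fpm.sock"

    # recreate the PostgreSQL client socket path
    mkdir -p "$base/var/run/postgresql"
    touch "$base/var/run/postgresql/.s.PGSQL.5432"

    # inside the container this runs as root; ignore failures elsewhere
    chown -R www-data:www-data "$base/run/php" "$base/var/run/postgresql" 2>/dev/null || true
}

# inside the container: fix_sockets && /usr/sbin/php-fpm7.4
```

One could run this from a shell in the pod after each update; whether the chart exposes a hook to run it automatically on container start has not been verified.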

PrivatePuffin commented 10 months ago

By now this container is heavily outdated, so this will likely not get any attention until that issue is resolved. But feel free to PR a fix.