Can you explain how this is a potential helm chart issue? Which service did that error come from, TimescaleDB or Patroni?
TimescaleDB logs.
I'm not fully sure it's a helm chart issue. I could not find any information on this error anywhere else, so I thought I'd give it a shot.
It might be best to open an issue there. This seems like either a Patroni or TimescaleDB error and not an issue with the helm chart itself.
ok, will do, thanks!
What did you do?
No action was taken; the DB cluster was running as normal overnight, with no unusual activity before the crash/restart.

Did you expect to see something different?
Yes; the instance restarted due to a crash.

Environment
Production.
Which helm chart and what version are you using?
```yaml
maintainers:
- email: support@timescale.com
  name: TimescaleDB
name: timescaledb-single
sources:
- https://github.com/timescale/timescaledb-kubernetes
- https://github.com/timescale/timescaledb-docker-ha
- https://github.com/zalando/patroni
version: 0.7.1
```
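For anyone reproducing this, a quick way to confirm which chart version is actually deployed is to list the installed releases; the namespace below is a placeholder for wherever the release lives:

```sh
# List installed Helm releases and their chart versions; <namespace> is a
# placeholder -- use the namespace your TimescaleDB release is deployed in.
helm list -n <namespace>
```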
What is in your values.yaml?

```yaml
replicaLoadBalancer:
  enabled: True
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000"
loadBalancer:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000"
```
```yaml
patroni:
  bootstrap:
    method: restore_or_initdb
    restore_or_initdb:
      command: >
        /etc/timescaledb/scripts/restore_or_initdb.sh
        --encoding=UTF8
        --locale=C.UTF-8
        --wal-segsize=256
    dcs:
      synchronous_mode: true
      master_start_timeout: 0
      postgresql:
        use_slots: false
        parameters:
          archive_timeout: 1h
          checkpoint_timeout: 600s
          temp_file_limit: '200GB'
          synchronous_commit: remote_apply
          synchronous_commit: local
```
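Note that `synchronous_commit` appears twice under `parameters`; duplicate keys are not strictly valid YAML, and most parsers keep the last occurrence, so `local` is likely the value in effect. One way to verify on the running primary (pod and namespace names are placeholders, not from this report):

```sh
# Check the live setting inside the primary pod; <timescaledb-pod> and
# <namespace> are placeholders for whatever your release created.
kubectl exec -it <timescaledb-pod> -n <namespace> -- \
  psql -U postgres -c 'SHOW synchronous_commit;'
```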
```yaml
persistentVolumes:
  data:
    size: '${ebs_vol_size}'
  wal:
    enabled: True
    size: '${wal_vol_size}'
```
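The sizes here are filled in by an outer templating step, so the values actually provisioned are easiest to read off the claims themselves; the namespace is again a placeholder:

```sh
# Inspect the PersistentVolumeClaims the chart created to confirm the
# rendered data/WAL sizes; <namespace> is a placeholder.
kubectl get pvc -n <namespace>
```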
```yaml
timescaledbTune:
  enabled: true
sharedMemory:
  useMount: false
```
```yaml
backup:
  enabled: true
  pgBackRest:archive-push:
    process-max: 4
    archive-async: "y"
  pgBackRest:archive-get:
    process-max: 4
    archive-async: "y"
    archive-get-queue-max: 2GB
  jobs:
```
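Since backups are enabled, one sanity check worth running is asking pgBackRest for its view of the backup and archive state from inside the database pod; pod and namespace names below are placeholders:

```sh
# Print pgBackRest's backup/archive status from inside the database pod;
# <timescaledb-pod> and <namespace> are placeholders.
kubectl exec -it <timescaledb-pod> -n <namespace> -- pgbackrest info
```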