Closed giz33 closed 1 year ago
Hi, I believe this is because that configuration is used by Patroni to "bootstrap" the cluster, so it only applies when the cluster is created initially. I'm facing the same problem as you; it looks like I'll have to edit my postgresql.conf manually.
Hi!
I managed to solve my problem using this approach.
I exec'd into my database pod with kubectl exec and found the path to the patroni config file in the output of ps -ef (it will be on the first line).
Then I used the following commands to change my configuration:
Update the config
curl -X PATCH -d '{"postgresql":{"parameters":{"temp_file_limit":"8GB"}}}' http://localhost:8008/config
Then stop and start my database:
patronictl -c /etc/patroni/patroni.yaml restart <CLUSTER_NAME>
The path to the yaml above comes from the first line of the ps -ef output inside the pod.
The cluster name you can get with this command:
patronictl -c /etc/patroni/patroni.yaml list
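The steps above can be collected into one sketch. This runs the same commands from outside the pod via kubectl exec; the pod name, cluster name, and config path are placeholders you must take from your own deployment (ps -ef output and patronictl list):

```shell
POD=timescaledb-0     # assumption: your database pod name
CLUSTER=timescaledb   # assumption: take this from `patronictl ... list`

# Find the patroni config file path (it appears on the first line of ps -ef)
kubectl exec "$POD" -- ps -ef | head -2

# Apply the parameter change through Patroni's REST API on port 8008
kubectl exec "$POD" -- curl -s -X PATCH \
  -d '{"postgresql":{"parameters":{"temp_file_limit":"8GB"}}}' \
  http://localhost:8008/config

# Restart the cluster so restart-only parameters take effect
kubectl exec "$POD" -- patronictl -c /etc/patroni/patroni.yaml restart "$CLUSTER"
```

Parameters that Patroni can reload without a restart are applied immediately after the PATCH; restart-only ones need the final step.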
Alternatively, I think we can customize these parameters by applying them like this in values.yaml:
timescaledbTune:
  enabled: true
  # For full flexibility, we allow you to override any timescaledb-tune parameter below.
  # However, these parameters only take effect on newly scheduled pods and their settings are
  # only visible inside those new pods.
  # Therefore you probably want to set explicit overrides in patroni.bootstrap.dcs.postgresql.parameters,
  # as those will take effect as soon as possible.
  # https://github.com/timescale/timescaledb-tune
  args:
    temp-file-limit: 8GB
After that, run helm upgrade to apply the updated values. I'm not sure whether this works, though, because I haven't tested it.
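For reference, a sketch of the explicit override the chart comment recommends, placed under patroni.bootstrap.dcs rather than under timescaledbTune.args. The key layout here is an assumption based on that comment; also note the earlier caveat in this thread that bootstrap settings may only apply when the cluster is first created:

```yaml
patroni:
  bootstrap:
    dcs:
      postgresql:
        parameters:
          temp_file_limit: 8GB
```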
Hello, thanks for your answer!
On my end, I somewhat fixed it by editing the values, running a Helm upgrade, and deleting the timescale-0 pod (that last step is super important). I'll use your method the next time I get blocked (which happens a lot, haha)
The underlying problem is that changes in values.yaml are only propagated to the ConfigMap containing patroni.yaml, but not to the pod's filesystem, so Patroni never sees the updated file until the pod is restarted or deleted. This is essentially a duplicate of https://github.com/timescale/helm-charts/issues/228 (see the comments there for details), which was reported over two years ago.
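One way to see the discrepancy is to compare what the ConfigMap says after a helm upgrade with what the pod actually has on disk. The ConfigMap name, namespace, pod name, and mounted config path below are assumptions; adjust them to your deployment (the real path is on the first line of ps -ef inside the pod):

```shell
# What the rendered ConfigMap contains after `helm upgrade`
kubectl get configmap timescaledb-patroni -n timescaledb -o yaml \
  | grep temp_file_limit

# What the running pod actually sees (may still show the old value,
# which is the propagation bug described above)
kubectl exec -n timescaledb timescaledb-0 -- \
  grep temp_file_limit /etc/timescaledb/patroni.yaml
```

If the two outputs differ, deleting the pod (so it remounts the ConfigMap) brings them back in sync.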
Hello! I'm using the timescaledb-single-node installation with an altered values.yaml. I inserted some parameters under patroni > bootstrap > dcs > postgresql > parameters, but these parameters aren't applied inside the pod's PostgreSQL configuration files. The ConfigMap is generated successfully, but when I look inside the pod, the file /var/lib/postgresql/data/postgresql.conf has values different from the ones I configured in the values.yaml I used with the helm install command.
This is the snippet of my values.yaml where I set the values for temp_file_limit and wal_level:
Note that the parameters wal_level and temp_file_limit are set to replica and 8GB respectively.
The command I used to upgrade my timescaledb config was:
helm upgrade timescaledb ./timescaledb-single --values values-development.yaml --namespace timescaledb
The ConfigMap was generated successfully with my patroni.yaml config (this is just a snippet of the ConfigMap):
Note again that temp_file_limit is 8GB, as I set in values.yaml.
But when I check in the database with the command
show temp_file_limit ;
Again, the wrong value:
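For reference, the same check can be run from outside the pod in one shot; the pod name and database user here are assumptions:

```shell
# Ask the running PostgreSQL instance what value is actually in effect
kubectl exec -it timescaledb-0 -- psql -U postgres -c 'SHOW temp_file_limit;'
```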
So, does anyone know why the config I set in values.yaml isn't applied to the database?
Thank you very much to anyone who helps me!