Hello!
I'm using a timescaledb-single-node installation with an altered values.yaml.
I added some parameters under patroni > bootstrap > dcs > postgresql > parameters, but these parameters are not applied inside the pod's PostgreSQL configuration files.
The ConfigMap is generated successfully, but when I look inside the pod, the file /var/lib/postgresql/data/postgresql.conf has values different from the ones I configured in the values.yaml that I pass to the helm install command.
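For reference, this is roughly how I look at that file from outside the pod (the pod name timescaledb-0 and the namespace are just examples from my deployment; adjust them for your release):

```
# Inspect the Patroni-generated postgresql.conf inside the pod
# (pod name and namespace are examples from my deployment)
kubectl exec -n timescaledb timescaledb-0 -- \
  grep -E 'temp_file_limit|wal_level' /var/lib/postgresql/data/postgresql.conf
```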
This is the snippet of my values.yaml where I set the values for temp_file_limit and wal_level:
```
log:
  level: WARNING
# https://patroni.readthedocs.io/en/latest/replica_bootstrap.html#bootstrap
bootstrap:
  method: restore_or_initdb
  restore_or_initdb:
    command: >
      /etc/timescaledb/scripts/restore_or_initdb.sh
      --encoding=UTF8
      --locale=C.UTF-8
    keep_existing_recovery_conf: true
  post_init: /etc/timescaledb/scripts/post_init.sh
  dcs:
    loop_wait: 10
    maximum_lag_on_failover: 33554432
    postgresql:
      parameters:
        archive_command: "/etc/timescaledb/scripts/pgbackrest_archive.sh %p"
        archive_mode: 'on'
        archive_timeout: 1800s
        #
        # Autovacuuming is very important to PostgreSQL. For TimescaleDB, in
        # most use cases the vacuuming part is of less importance (there are no deleted tuples to prune),
        # however, the autoanalyze bit (updating the statistics of the chunks) is important to help
        # in planning queries. Therefore we do some tuning of autovacuum to address these
        # TimescaleDB-specific concerns.
        # We'd rather have autovacuum do things early, as this increases the chances that autovacuum
        # will find the buffers it needs in shared_buffers, instead of having to fetch them from disk.
        #
        autovacuum_analyze_scale_factor: 0.02
        # This allows us to auto-analyze at most 120 (pretty much empty) chunks every 5 seconds.
        # This will ensure that we can have up-to-date statistics on inserts very, very quickly.
        autovacuum_naptime: 5s
        autovacuum_max_workers: 10
        # We don't want vacuum work to be building up, therefore we increase
        # the cost limit so that the work to be done for vacuum will be done quickly.
        autovacuum_vacuum_cost_limit: 500
        autovacuum_vacuum_scale_factor: 0.05
        log_autovacuum_min_duration: 1min
        hot_standby: 'on'
        log_checkpoints: 'on'
        log_connections: 'on'
        log_disconnections: 'on'
        log_line_prefix: "%t [%p]: [%c-%l] %u@%d,app=%a [%e] "
        log_lock_waits: 'on'
        log_min_duration_statement: '1s'
        log_statement: ddl
        log_timezone: 'UTC'
        timezone: 'UTC'
        max_connections: 100
        max_prepared_transactions: 150
        shared_preload_libraries: timescaledb,pg_stat_statements,pg_cron
        ssl: 'on'
        ssl_cert_file: '/etc/certificate/tls.crt'
        ssl_key_file: '/etc/certificate/tls.key'
        tcp_keepalives_idle: 900
        tcp_keepalives_interval: 100
        temp_file_limit: 8GB
        timescaledb.passfile: '../.pgpass'
        unix_socket_directories: "/var/run/postgresql"
        unix_socket_permissions: '0750'
        wal_level: hot_standby
        wal_log_hints: 'on'
      use_pg_rewind: true
      use_slots: true
    retry_timeout: 10
    ttl: 30
```
Note that I set temp_file_limit to 8GB and wal_level to hot_standby.
But this is what my /var/lib/postgresql/data/postgresql.conf actually contains:

```
# Do not edit this file manually!
# It will be overwritten by Patroni!
include 'postgresql.base.conf'
archive_command = '/etc/timescaledb/scripts/pgbackrest_archive.sh %p'
archive_mode = 'on'
archive_timeout = '1800s'
autovacuum_analyze_scale_factor = '0.02'
autovacuum_max_workers = '10'
autovacuum_naptime = '5s'
autovacuum_vacuum_cost_limit = '500'
autovacuum_vacuum_scale_factor = '0.05'
cluster_name = 'timescaledb'
hot_standby = 'on'
listen_addresses = '0.0.0.0'
log_autovacuum_min_duration = '1min'
log_checkpoints = 'on'
log_connections = 'on'
log_disconnections = 'on'
log_line_prefix = '%t [%p]: [%c-%l] %u@%d,app=%a [%e] '
log_lock_waits = 'on'
log_min_duration_statement = '1s'
log_statement = 'ddl'
log_timezone = 'UTC'
max_connections = '1000'
max_locks_per_transaction = '64'
max_prepared_transactions = '150'
max_replication_slots = '10'
max_wal_senders = '10'
max_worker_processes = '8'
port = '5432'
shared_preload_libraries = 'timescaledb,pg_stat_statements,pg_cron'
ssl = 'on'
ssl_cert_file = '/etc/certificate/tls.crt'
ssl_key_file = '/etc/certificate/tls.key'
tcp_keepalives_idle = '900'
tcp_keepalives_interval = '100'
temp_file_limit = '1GB'
timescaledb.passfile = '../.pgpass'
timezone = 'UTC'
track_commit_timestamp = 'off'
unix_socket_directories = '/var/run/postgresql'
unix_socket_permissions = '0750'
wal_keep_size = '128MB'
wal_level = 'replica'
wal_log_hints = 'on'
hba_file = '/var/lib/postgresql/data/pg_hba.conf'
ident_file = '/var/lib/postgresql/data/pg_ident.conf'
# recovery.conf
recovery_target = ''
recovery_target_lsn = ''
recovery_target_name = ''
recovery_target_time = ''
recovery_target_timeline = 'latest'
recovery_target_xid = ''
restore_command = '/etc/timescaledb/scripts/pgbackrest_archive_get.sh %f "%p"'
```
Note that here wal_level and temp_file_limit are replica and 1GB respectively, not the hot_standby and 8GB I configured.
The command that I used to upgrade my config for timescaledb was:
```
helm upgrade timescaledb ./timescaledb-single --values values-development.yaml --namespace timescaledb
```
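To double-check that the release really picked up my values, I also dump them back from Helm and grep the rendered Patroni ConfigMap (the ConfigMap name timescaledb-patroni is an assumption based on my release name; it may differ in other setups):

```
# Show the user-supplied values Helm applied to the release
helm get values timescaledb --namespace timescaledb

# Grep the rendered Patroni configuration in the ConfigMap
# (ConfigMap name is an assumption based on my release name)
kubectl get configmap timescaledb-patroni -n timescaledb -o yaml | grep -E 'temp_file_limit|wal_level'
```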
The ConfigMap was generated successfully with my patroni.yaml config (this is just a snippet of the ConfigMap):
Note again that temp_file_limit is 8GB there, as I set it in values.yaml.
But when I check in the database with show temp_file_limit; I see the wrong value again.
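Roughly how I run that check, in case it matters (the pod name and the postgres user are assumptions from my setup):

```
# Query the live settings via psql inside the pod
# (pod name and database user are examples from my deployment)
kubectl exec -n timescaledb timescaledb-0 -- \
  psql -U postgres -c 'SHOW temp_file_limit;' -c 'SHOW wal_level;'
```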
So, does anyone know why the configuration I set in values.yaml is not applied to the database?
Thank you very much to anyone who can help me.