timescale / timescaledb

An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
https://www.timescale.com/

Out of shared memory when running ALTER EXTENSION .. UPDATE #7130

Closed · bearpaws closed 1 month ago

bearpaws commented 1 month ago

What type of bug is this?

Unexpected error

What subsystems and features are affected?

Platform/OS

What happened?

We are upgrading a TimescaleDB database from 2.11.2 (pg 14.9) to 2.15.2 (pg 14.12). When updating the timescaledb extension, this error occurs:

ALTER EXTENSION timescaledb UPDATE;
ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
CONTEXT:  SQL statement "ALTER TABLE _timescaledb_internal.compress_hyper_1975_244622_chunk NO INHERIT _timescaledb_internal._compressed_hypertable_1975"
PL/pgSQL function inline_code_block line 26 at EXECUTE

After re-running, the update fails on the same table (_timescaledb_internal._compressed_hypertable_1975). But after we increased the max_locks_per_transaction setting, it failed on a different table, so I don't think the error is tied to any specific chunk.

Increasing the debug log level didn't reveal anything more, and increasing available memory didn't help either.

TimescaleDB version affected

2.15.2

PostgreSQL version used

14.12

What operating system did you use?

Ubuntu Docker images in Kubernetes

What installation method did you use?

Docker

What platform did you run on?

On prem/Self-hosted

Relevant log output and stack trace

No response

How can we reproduce the bug?

I'm not sure this is externally reproducible, but in our case we had a 2.11.2 installation, restarted the postgres pod with the new Docker image containing the 2.15.2 binaries, and ran the `ALTER EXTENSION` command on startup.

This instance has ~5,000 hypertables and ~180,000 chunks. `shared_buffers` is currently 5GB (of 20GB total pod memory).
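For reference, a minimal sketch of how counts like these can be obtained, assuming the standard `timescaledb_information` views shipped with TimescaleDB 2.x (view names may differ in older versions):

```sql
-- Count hypertables and chunks via the informational views (TimescaleDB 2.x)
SELECT count(*) AS hypertables FROM timescaledb_information.hypertables;
SELECT count(*) AS chunks      FROM timescaledb_information.chunks;
```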
bearpaws commented 1 month ago

Update: after increasing max_locks_per_transaction to 10000, the ALTER EXTENSION completed successfully.
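For anyone hitting the same wall, this is roughly what the workaround looks like; 10000 is simply the value that worked here, and note that max_locks_per_transaction only takes effect after a server restart (in a Kubernetes setup the setting may instead be managed through the pod's configuration):

```sql
-- Check the value currently in effect (the PostgreSQL default is 64)
SHOW max_locks_per_transaction;

-- Raise it; this parameter can only be changed at server start
ALTER SYSTEM SET max_locks_per_transaction = 10000;

-- ... restart PostgreSQL, then re-run the upgrade:
ALTER EXTENSION timescaledb UPDATE;
```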

So I guess the question now is whether this is expected when dealing with a large number of tables and chunks, or whether there is a bug in the extension upgrade.

Generally speaking, what is the transaction strategy for an extension update? Are all table mutations made in a single transaction?

fabriziomello commented 1 month ago

@bearpaws this is expected given the number of relations you have. Updating the extension requires locks on objects that depend on the extension, which includes hypertables and chunks. The default Postgres configuration has a conservative value for max_locks_per_transaction, so in your case it is recommended to increase it. Even a simple pg_dump can require a high value for this parameter.
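For context on sizing: per the PostgreSQL documentation, the shared lock table holds roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) object locks across all sessions, and a single transaction may hold more than max_locks_per_transaction locks as long as the shared table has room. A quick sketch to compute the current capacity (only standard GUCs, nothing TimescaleDB-specific):

```sql
-- Approximate capacity of the shared lock table:
--   max_locks_per_transaction * (max_connections + max_prepared_transactions)
SELECT current_setting('max_locks_per_transaction')::int
     * (current_setting('max_connections')::int
      + current_setting('max_prepared_transactions')::int) AS lock_table_slots;
```

If the number of relations the upgrade must touch approaches this figure, the upgrade transaction will exhaust the lock table regardless of available memory, which matches the behavior reported above.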