timescale / timescaledb

An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
https://www.timescale.com/

Upgrade to timescaleDB 2.0 failed ("shared library version mismatch") #2795

Closed: AlexFavre closed this issue 10 months ago

AlexFavre commented 3 years ago

Relevant system information:

We use Patroni 2.0.1 for clustering PostgreSQL, with two members: one leader and one replica.

Describe the bug: When we try to upgrade, it fails with this message:

ERROR: extension "timescaledb" version mismatch: shared library version 2.0.0; SQL version 1.7.4
CONTEXT: parallel worker
SQL statement "SELECT sum(_ts_meta_count), count(*) FROM _timescaledb_internal.compress_hyper_8_489_chunk"
PL/pgSQL function inline_code_block line 21 at EXECUTE

To Reproduce
Steps to reproduce the behavior:
1. Install the TimescaleDB 2.0 RPM: yum install timescaledb-2-postgresql-12.x86_64
2. psql -X zabbix
3. ALTER EXTENSION timescaledb UPDATE;
4. ERROR: extension "timescaledb" version mismatch: shared library version 2.0.0; SQL version 1.7.4
   CONTEXT: parallel worker
   SQL statement "SELECT sum(_ts_meta_count), count(*) FROM _timescaledb_internal.compress_hyper_8_489_chunk"
   PL/pgSQL function inline_code_block line 21 at EXECUTE

Thanks in advance for your help

Da-Teach commented 3 years ago

I had the same issue; you have to (temporarily) set max_parallel_workers to 0. After the upgrade you can revert it to its original value.
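
For anyone landing here later, a minimal sketch of that workaround (assuming superuser access; max_parallel_workers can be changed with a configuration reload, no restart needed):

-- a sketch only; adjust to your own setup
ALTER SYSTEM SET max_parallel_workers = 0;   -- temporarily disable parallel workers
SELECT pg_reload_conf();                     -- pick up the change without a restart

-- in a fresh session (psql -X), run the update as the first command:
ALTER EXTENSION timescaledb UPDATE;

-- then revert to the previous value:
ALTER SYSTEM RESET max_parallel_workers;
SELECT pg_reload_conf();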

AlexFavre commented 3 years ago

It's ok now. thanks a lot

mfreed commented 3 years ago

So you were getting that error about the SELECT when you tried to run ALTER EXTENSION as the first thing in a new psql session? Does this have anything to do with Zabbix running jobs concurrently in the background? (I admit we should figure out how to solve it; I'm just curious where that command is coming from.)

Thanks!

AlexFavre commented 3 years ago

Yes, the first command was always 'ALTER EXTENSION timescaledb UPDATE;' in a new psql session, connected with 'psql -X zabbix'.

What worked: set max_parallel_workers = 0 in the config file (reloaded by Patroni on each node), connect with psql -X zabbix, and run ALTER EXTENSION timescaledb UPDATE;. After that it's OK.
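
A rough sketch of those steps on a Patroni-managed cluster; the config file path and the zabbix database name are only the ones used in this thread, so adjust to your environment:

# edit the cluster-wide configuration so Patroni applies the parameter on every node
patronictl -c /etc/patroni/patroni.yml edit-config
#   postgresql:
#     parameters:
#       max_parallel_workers: 0

# run the update on the leader as the first command in a fresh session
# (the catalog change reaches the replica through replication)
psql -X zabbix -c "ALTER EXTENSION timescaledb UPDATE;"

# afterwards, revert max_parallel_workers to its previous value the same way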

biggerfisch commented 3 years ago

I get this issue while using docker in a DB that has never seen Zabbix and has no other services running.

Edit:

Have not tried a full reproduction chain, but I imagine it would follow a path of something like: installation, enabling compression, letting that run, then attempting the upgrade.
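
Not a verified reproduction, but a sketch of what that path might look like in SQL; the table and column names are made up, and the compression calls used here exist in both 1.7 and 2.x:

-- create a hypertable and compress some chunks before the upgrade (illustrative only)
CREATE TABLE metrics (time timestamptz NOT NULL, device_id int, value double precision);
SELECT create_hypertable('metrics', 'time');

ALTER TABLE metrics SET (timescaledb.compress, timescaledb.compress_segmentby = 'device_id');
SELECT compress_chunk(c) FROM show_chunks('metrics') AS c;

-- after installing the 2.0 packages, the step that fails in this issue:
ALTER EXTENSION timescaledb UPDATE;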

slasktrat commented 3 years ago

Had same issue and this solved it, thanks.

dwalthour commented 3 years ago

This workaround also solved it for me, but the symptoms of upgrading while max_parallel_workers > 0 were very different. For me the upgrade process ran for 24-36 hours and then failed with no clue why. When I set max_parallel_workers = 0 in the config file (suggested by Mike, thanks!), the same update took less than a minute and completed successfully. I strongly recommend that the documentation tell people upgrading TimescaleDB to set max_parallel_workers = 0 before doing the upgrade!
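
For anyone following that advice, the config-file variant is roughly the following (assuming direct access to postgresql.conf; revert the value once the upgrade is done):

# postgresql.conf, temporarily, before the upgrade
max_parallel_workers = 0

# reload the configuration (no restart needed for this parameter), e.g.:
#   pg_ctl reload -D /path/to/data    or, from SQL:    SELECT pg_reload_conf();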

ali-ghasempor commented 3 years ago

Same issue on TimescaleDB v2.3.0 and changing max_parallel_workers to 0 helped!

mfundul commented 3 years ago

Probably the same issue https://github.com/timescale/timescaledb/issues/3286

Nathapat-Boss commented 3 years ago

To update from TimescaleDB v2.2.1 to v2.3.0 I needed to change these parameters, and it worked.

max_parallel_workers = 0            # previously 4
max_locks_per_transaction = 1024    # previously 256

At first I changed only max_parallel_workers, but I got an "out of shared memory" error, so I then raised max_locks_per_transaction to 1024. ALTER EXTENSION timescaledb UPDATE now works.
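
For context, max_locks_per_transaction, unlike max_parallel_workers, only takes effect after a server restart, and the value you need depends on how many chunks the update has to touch. A sketch of the order of operations (the psql flags are just an example):

# after changing the two parameters in postgresql.conf, restart the server
# (max_locks_per_transaction only applies after a restart), then:
psql -X -c "ALTER EXTENSION timescaledb UPDATE;"
# finally restore both values and restart again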

fredericgermain commented 3 years ago

I'm not the best Postgres expert; this is what I did to try to migrate timescaledev/promscale-extension 0.1.1 -> 0.1.2-2.3.0-pg12 and promscale 0.1.2 -> 0.4.1. I ended up only upgrading to promscale:0.3.0. If someone has a better idea how to do this...

(Using docker)

docker stop promscale-connector

psql
SHOW max_locks_per_transaction;   -- 64
SHOW max_parallel_workers;        -- 2
SHOW autovacuum;                  -- on
ALTER SYSTEM SET max_parallel_workers = 0;
ALTER SYSTEM SET max_locks_per_transaction = 30000;
ALTER SYSTEM SET autovacuum = 0;

docker restart promscale-db

psql
ALTER EXTENSION timescaledb UPDATE;

docker start promscale-connector
docker logs promscale-connector

level=error ts=2021-05-27T18:19:23.432Z caller=runner.go:40 msg="aborting startup due to error" err="migration error: Error while trying to migrate DB: Error encountered during migration: error executing migration script: name versions/dev/0.3.1-dev/1-fix_permissions.sql, err ERROR: function _prom_catalog.get_metrics_that_need_compression() does not exist (SQLSTATE 42883)"

Going back to promscale:0.3.0:
docker start promscale-connector

The connector starts with no errors.

psql
ALTER SYSTEM SET max_locks_per_transaction = 64;
ALTER SYSTEM SET max_parallel_workers = 2;
ALTER SYSTEM SET autovacuum = 1;

docker restart promscale-db
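
Pulled together, the sequence above is roughly the following sketch; the container names and the 30000 lock value are taken from this comment, and it assumes psql is available inside the database container as the postgres user:

# stop the connector so nothing touches the database during the update
docker stop promscale-connector

# relax the settings that block the extension update, then restart to apply them
docker exec promscale-db psql -U postgres -c "ALTER SYSTEM SET max_parallel_workers = 0;"
docker exec promscale-db psql -U postgres -c "ALTER SYSTEM SET max_locks_per_transaction = 30000;"
docker exec promscale-db psql -U postgres -c "ALTER SYSTEM SET autovacuum = 0;"
docker restart promscale-db

# run the extension update as the first command in a fresh session
docker exec promscale-db psql -U postgres -X -c "ALTER EXTENSION timescaledb UPDATE;"

# restore the original settings and restart again
docker exec promscale-db psql -U postgres -c "ALTER SYSTEM SET max_locks_per_transaction = 64;"
docker exec promscale-db psql -U postgres -c "ALTER SYSTEM SET max_parallel_workers = 2;"
docker exec promscale-db psql -U postgres -c "ALTER SYSTEM SET autovacuum = 1;"
docker restart promscale-db

# start the connector again and check its logs
docker start promscale-connector
docker logs promscale-connector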

bobobo1618 commented 3 years ago

To update from TimescaleDB v2.2.1 to v2.3.0 I needed to change these parameters, and it worked.

max_parallel_workers = 0            # previously 4
max_locks_per_transaction = 1024    # previously 256

At first I changed only max_parallel_workers, but I got an "out of shared memory" error, so I then raised max_locks_per_transaction to 1024. ALTER EXTENSION timescaledb UPDATE now works.

I also needed this to make the update work.

@mfundul I think your commit will fix the max_parallel_workers issue but it won't fix the max_locks_per_transaction issue.

jflambert commented 10 months ago

I'm having this issue. I needed to downgrade from 2.13.0 to 2.12.2 to restore a backup, and now I can't upgrade back to 2.13.0.

Except the error message is different. So maybe I should log a new issue?

psql -X -qtc "alter extension timescaledb update"
ERROR:  catalog version mismatch, expected "2.12.2" seen "2.13.0"
CONTEXT:  PL/pgSQL function inline_code_block line 7 at RAISE
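
For anyone comparing notes, a quick diagnostic sketch (not a fix) to see which versions the catalog and the installed packages think are in play:

-- version recorded in the SQL catalog for the installed extension
SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';
-- versions available from the packages on disk
SELECT default_version, installed_version FROM pg_available_extensions WHERE name = 'timescaledb';
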
jnidzwetzki commented 10 months ago

Hello @jflambert,

Thanks for bringing this to our attention. This seems to be the same issue as reported in #6496. Please check whether the workaround provided in that issue also fixes the problem in your environment.

jflambert commented 10 months ago

I'm sorry I made you reopen this issue. It's definitely the other one. Looking forward to 2.13.1