For major releases of PostgreSQL, the internal data storage format is subject to change, thus complicating upgrades. The traditional method for moving data to a new major version is to dump and reload the database, though this can be slow. A faster method is pg_upgrade.
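For reference, a typical pg_upgrade invocation looks like the following. The paths are illustrative and would depend on the installation layout; the flags themselves are the documented ones.

```shell
# Verify the old cluster can be upgraded in place, without modifying any data.
# Drop --check to perform the actual upgrade once the dry run passes.
pg_upgrade \
  --old-datadir /var/lib/postgresql/13/data \
  --new-datadir /var/lib/postgresql/14/data \
  --old-bindir  /usr/lib/postgresql/13/bin \
  --new-bindir  /usr/lib/postgresql/14/bin \
  --check
```

Note that pg_upgrade requires both the old and new server binaries on the same machine, which is part of why it is awkward in managed environments.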
Generally speaking, it’s a bad idea to perform in-place upgrades across major versions. I think we should seriously consider allowing users to restore a snapshot into a new app. This is relatively common practice across vendors and would give users a safe path to test the new version against their dataset, clients, etc. before fully committing.
I think there are quite a few ways we could do this, but here's a rough example of what this process could look like:
1. Provision a new app.
2. Provision and attach a new volume that meets the size constraints specified by the target snapshot.
3. Provision and attach a second "source" volume containing the restore data.
It certainly requires some orchestration, so I'm not sure how feasible this process will be in the short-to-medium term.
Reference: https://www.postgresql.org/docs/13/upgrading.html