TandoorRecipes / recipes

Application for managing recipes, planning meals, building shopping lists and much much more!
https://docs.tandoor.dev

Issues when trying to backup with pg_dumpall for postgres upgrade #2782

Closed maxbrueckl closed 6 months ago

maxbrueckl commented 9 months ago

Issue

As support for Postgres versions <= 11 was dropped, I started the upgrade process. I ran

docker exec -it recipes_db pg_dumpall -U djangouser > ~/tandoor.sql

on my existing database. The resulting tandoor.sql looks like this:

--
-- PostgreSQL database cluster dump
--

SET default_transaction_read_only = off;

SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;

--
-- Roles
--

CREATE ROLE djangouser;
ALTER ROLE djangouser WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS PASSWORD 'XXXXXXXX';

\connect template1

--
-- PostgreSQL database dump
--

-- Dumped from database version 11.22
-- Dumped by pg_dump version 11.22

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

--
-- PostgreSQL database dump complete
--

pg_dump: [archiver (db)] query failed: ERROR:  could not identify an ordering operator for type oid
LINE 1: ...ssid = 'pg_extension'::regclass AND deptype = 'e' ORDER BY 3
                                                                      ^
HINT:  Use an explicit ordering operator or modify the query.
pg_dump: [archiver (db)] query was: SELECT classid, objid, refobjid FROM pg_depend WHERE refclassid = 'pg_extension'::regclass AND deptype = 'e' ORDER BY 3
pg_dumpall: pg_dump failed on database "djangodb", exiting

Any idea how I can proceed?

Tandoor Version

1.5.10

OS Version

Ubuntu 22.04.3

Setup

Docker / Docker-Compose

Reverse Proxy

Traefik

Other

No response

Environment file

# only set this to true when testing/debugging
# when unset: 1 (true) - don't unset this, just for development
DEBUG=1
SQL_DEBUG=0

# hosts the application can run under e.g. recipes.mydomain.com,cooking.mydomain.com,...
ALLOWED_HOSTS=XXXX

# random secret key, use for example `base64 /dev/urandom | head -c50` to generate one
# ---------------------------- REQUIRED -------------------------
SECRET_KEY=XXXX
# ---------------------------------------------------------------

# your default timezone See https://timezonedb.com/time-zones for a list of timezones
# TIMEZONE=Europe/Berlin

# add only a database password if you want to run with the default postgres, otherwise change settings accordingly
DB_ENGINE=django.db.backends.postgresql
# DB_OPTIONS= {} # e.g. {"sslmode":"require"} to enable ssl
POSTGRES_HOST=recipes_db
POSTGRES_PORT=5432
POSTGRES_USER=djangouser
# ---------------------------- REQUIRED -------------------------
POSTGRES_PASSWORD=XXXXX
# ---------------------------------------------------------------
POSTGRES_DB=djangodb

# database connection string, when used overrides other database settings.
# format might vary depending on backend
# DATABASE_URL = engine://username:password@host:port/dbname

# the default value for the user preference 'fractions' (enable/disable fraction support)
# default: disabled=0
FRACTION_PREF_DEFAULT=0

# the default value for the user preference 'comments' (enable/disable commenting system)
# default comments enabled=1
COMMENT_PREF_DEFAULT=1

# Users can set an amount of time after which the shopping list is refreshed when they are in viewing mode
# This is the minimum interval users can set. Setting this too low will allow users to refresh very frequently, which
# might cause high load on the server. (Technically they can obviously refresh as often as they want with their own scripts)
SHOPPING_MIN_AUTOSYNC_INTERVAL=5

# Default for user setting sticky navbar
# STICKY_NAV_PREF_DEFAULT=1

# If base URL is something other than just / (you are serving a subfolder in your proxy for instance http://recipe_app/recipes/)
# Be sure to not have a trailing slash: e.g. '/recipes' instead of '/recipes/'
# SCRIPT_NAME=/recipes

# If staticfiles are stored at a different location uncomment and change accordingly, MUST END IN /
# this is not required if you are just using a subfolder
# This can either be a relative path from the applications base path or the url of an external host
# STATIC_URL=/static/

# If mediafiles are stored at a different location uncomment and change accordingly, MUST END IN /
# this is not required if you are just using a subfolder
# This can either be a relative path from the applications base path or the url of an external host
# MEDIA_URL=/media/

# Serve mediafiles directly using gunicorn. Basically everyone recommends not doing this. Please use any of the examples
# provided that include an additional nginx container to handle media file serving.
# If you know what you are doing, turn this back on (1) to serve media files using Django's serve() method.
# when unset: 1 (true) - this is temporary until an appropriate amount of time has passed for everyone to migrate
GUNICORN_MEDIA=0

# S3 Media settings: store mediafiles in s3 or any compatible storage backend (e.g. minio)
# as long as S3_ACCESS_KEY is not set S3 features are disabled
# S3_ACCESS_KEY=
# S3_SECRET_ACCESS_KEY=
# S3_BUCKET_NAME=
# S3_REGION_NAME= # default none, set your region might be required
# S3_QUERYSTRING_AUTH=1 # default true, set to 0 to serve media from a public bucket without signed urls
# S3_QUERYSTRING_EXPIRE=3600 # number of seconds querystring are valid for
# S3_ENDPOINT_URL= # when using a custom endpoint like minio

# Email Settings, see https://docs.djangoproject.com/en/3.2/ref/settings/#email-host
# Required for email confirmation and password reset (automatically activates if host is set)
# EMAIL_HOST=
# EMAIL_PORT=
# EMAIL_HOST_USER=
# EMAIL_HOST_PASSWORD=
# EMAIL_USE_TLS=0
# EMAIL_USE_SSL=0
# DEFAULT_FROM_EMAIL= # email sender address (default 'webmaster@localhost')
# ACCOUNT_EMAIL_SUBJECT_PREFIX= # prefix used for account related emails (default "[Tandoor Recipes] ")

# allow authentication via reverse proxy (e.g. authelia), leave off if you don't know what you are doing
# see docs for more information https://vabene1111.github.io/recipes/features/authentication/
# when unset: 0 (false)
REMOTE_USER_AUTH=0

# Default settings for spaces, apply per space and can be changed in the admin view
# SPACE_DEFAULT_MAX_RECIPES=0 # 0=unlimited recipes
# SPACE_DEFAULT_MAX_USERS=0 # 0=unlimited users per space
# SPACE_DEFAULT_MAX_FILES=0 # Maximum file storage for space in MB. 0 for unlimited, -1 to disable file upload.
# SPACE_DEFAULT_ALLOW_SHARING=1 # Allow users to share recipes with public links

# allow people to create accounts on your application instance (without an invite link)
# when unset: 0 (false)
ENABLE_SIGNUP=1

# If signup is enabled you might want to add a captcha to it to prevent spam
# HCAPTCHA_SITEKEY=
# HCAPTCHA_SECRET=

# if signup is enabled you might want to provide urls to data protection policies or terms and conditions
# TERMS_URL=
# PRIVACY_URL=
# IMPRINT_URL=

# enable serving of prometheus metrics under the /metrics path
# ATTENTION: view is not secured (as per the prometheus default way) so make sure to secure it
# through your web server (or leave it open if you don't care whether the stats are exposed)
# ENABLE_METRICS=0

# allows you to setup OAuth providers
# see docs for more information https://vabene1111.github.io/recipes/features/authentication/
SOCIAL_PROVIDERS=allauth.socialaccount.providers.openid_connect
SOCIALACCOUNT_PROVIDERS = { "openid_connect": { "SERVERS": [ { "id": "authelia", "name": "Authelia", "server_url": "https://XXXX/.well-known/openid-configuration", "token_auth_method": "client_secret_basic", "APP": { "client_id": "tandoor", "secret": "XXXXXX", }, } ] }}

# Should a newly created user from a social provider get assigned to the default space and given permission by default ?
# ATTENTION: This feature might be deprecated in favor of a space join and public viewing system in the future
# default 0 (false), when 1 (true) users will be assigned space and group
SOCIAL_DEFAULT_ACCESS = 1

# if SOCIAL_DEFAULT_ACCESS is used, which group should be added
SOCIAL_DEFAULT_GROUP = admin

# Django session cookie settings. Can be changed to allow a single django application to authenticate several applications
# when running under the same database
# SESSION_COOKIE_DOMAIN=.example.com
# SESSION_COOKIE_NAME=sessionid # use this only to not interfere with non unified django applications under the same top level domain

# By default SORT_TREE_BY_NAME is disabled; this will store all Keywords and Food in the order they are created.
# Enabling this setting makes saving new keywords and foods very slow, which doesn't matter in most use cases.
# However, large imports of recipes that create new objects can take 10-15x longer in total.
# Keywords and Food can be manually sorted by name in Admin.
# This value can also be temporarily changed in Admin; it will revert the next time the application is started.
# This will be fixed/changed in the future by changing the implementation or finding a better workaround for sorting.
# SORT_TREE_BY_NAME=0

# LDAP authentication
# default 0 (false), when 1 (true) list of allowed users will be fetched from LDAP server
#LDAP_AUTH=
#AUTH_LDAP_SERVER_URI=
#AUTH_LDAP_BIND_DN=
#AUTH_LDAP_BIND_PASSWORD=
#AUTH_LDAP_USER_SEARCH_BASE_DN=

# Enables exporting PDF (see export docs)
# Disabled by default, uncomment to enable
ENABLE_PDF_EXPORT=1

Docker-Compose file

  # Tandoor - Recipes
  recipes_db:
    <<: *common-keys-apps # See EXTENSION FIELDS at the top
    image: postgres:11-alpine
    container_name: recipes_db
    volumes:
      - $APPDATADIR/postgresql:/var/lib/postgresql/data
    env_file: ./recipes.env
    environment:
      <<: *environment-tz-puid-pgid
#      POSTGRES_PASSWORD_FILE: /run/secrets/postgres_password
#    secrets:
#      - postgres_password
    networks:
      - recipes

  recipes_web:
    <<: *common-keys-apps-privileged # See EXTENSION FIELDS at the top
    image: vabene1111/recipes
    container_name: recipes_web
    volumes:
      - $APPDATADIR/recipes/staticfiles:/opt/recipes/staticfiles
      - $APPDATADIR/recipes/nginx_config:/opt/recipes/nginx/conf.d
      - $APPDATADIR/recipes/mediafiles:/opt/recipes/mediafiles
    env_file: ./recipes.env
    environment:
      <<: *environment-tz-puid-pgid
    depends_on:
      - recipes_db
    networks:
      - recipes

  recipes_nginx:
    <<: *common-keys-apps-privileged # See EXTENSION FIELDS at the top
    image: nginx:mainline-alpine
    container_name: recipes_nginx
    volumes:
      - $APPDATADIR/recipes/nginx_config:/etc/nginx/conf.d:ro
      - $APPDATADIR/recipes/staticfiles:/static
      - $APPDATADIR/recipes/mediafiles:/media
    env_file:
      - ./recipes.env
    environment:
      <<: *environment-tz-puid-pgid
    labels: # traefik example labels
      - "traefik.enable=true"
      ## HTTP Routers
      - "traefik.http.routers.recipes-rtr.entrypoints=https"
      - "traefik.http.routers.recipes-rtr.rule=Host(`recipes.$DOMAINNAME`)"
      ## Middlewares
      - "traefik.http.routers.recipes-rtr.middlewares=chain-authelia@file"
      ## HTTP Services
      - "traefik.http.routers.recipes-rtr.service=recipes-svc"
      - "traefik.http.services.recipes-svc.loadbalancer.server.port=80"
    depends_on:
      - recipes_web
    networks:
      - t2_proxy
      - recipes

Relevant logs

Log from database container:

2023-12-05 12:09:52.074 CET [534] ERROR:  could not identify an ordering operator for type oid at character 119

2023-12-05 12:09:52.074 CET [534] HINT:  Use an explicit ordering operator or modify the query.

2023-12-05 12:09:52.074 CET [534] STATEMENT:  SELECT classid, objid, refobjid FROM pg_depend WHERE refclassid = 'pg_extension'::regclass AND deptype = 'e' ORDER BY 3
smilerz commented 9 months ago

can you try docker exec -it postgres pg_dump -d djangodb -Z6 --schema=public --blobs -U djangouser > ~/tandoor.sql
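
For reference: -Z6 gzip-compresses the plain-format output, --schema=public restricts the dump to the application schema, and --blobs includes large objects. A dump produced this way would typically be restored into the new cluster by piping it through gunzip into psql; a minimal sketch, assuming the container name, user, and database from this thread:

# the dump is gzip-compressed despite the .sql name, so decompress while restoring;
# docker exec needs -i so the piped data actually reaches psql's stdin
gunzip -c ~/tandoor.sql | docker exec -i recipes_db psql -U djangouser -d djangodb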

maxbrueckl commented 9 months ago

I did. The logs show the same entries as above. This is the resulting tandoor.sql:

pg_dump: [archiver (db)] query failed: ERROR:  could not identify an ordering operator for type oid
LINE 1: ...ssid = 'pg_extension'::regclass AND deptype = 'e' ORDER BY 3
                                                                      ^
HINT:  Use an explicit ordering operator or modify the query.
pg_dump: [archiver (db)] query was: SELECT classid, objid, refobjid FROM pg_depend WHERE refclassid = 'pg_extension'::regclass AND deptype = 'e' ORDER BY 3
smilerz commented 9 months ago

please run the original command adding -v

maxbrueckl commented 9 months ago

Container logs show the same entries as before. cat tandoor.sql:

pg_dumpall: executing SELECT pg_catalog.set_config('search_path', '', false)
--
-- PostgreSQL database cluster dump
--

-- Started on 2023-12-05 15:37:10 CET

SET default_transaction_read_only = off;

SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;

pg_dumpall: executing SELECT oid, rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolconnlimit, rolpassword, rolvaliduntil, rolreplication, rolbypassrls, pg_catalog.shobj_description(oid, 'pg_authid') as rolcomment, rolname = current_user AS is_current_user FROM pg_authid WHERE rolname !~ '^pg_' ORDER BY 2
--
-- Roles
--

pg_dumpall: executing SELECT provider, label FROM pg_catalog.pg_shseclabel WHERE classoid = 'pg_catalog.pg_authid'::pg_catalog.regclass AND objoid = '10'
CREATE ROLE djangouser;
ALTER ROLE djangouser WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS PASSWORD 'XXXXX';
pg_dumpall: executing SELECT setconfig[1] FROM pg_db_role_setting WHERE setdatabase = 0 AND setrole = (SELECT oid FROM pg_authid WHERE rolname = 'djangouser')

pg_dumpall: executing SELECT ur.rolname AS roleid, um.rolname AS member, a.admin_option, ug.rolname AS grantor FROM pg_auth_members a LEFT JOIN pg_authid ur on ur.oid = a.roleid LEFT JOIN pg_authid um on um.oid = a.member LEFT JOIN pg_authid ug on ug.oid = a.grantor WHERE NOT (ur.rolname ~ '^pg_' AND um.rolname ~ '^pg_')ORDER BY 1,2,3

pg_dumpall: executing SELECT oid, spcname, pg_catalog.pg_get_userbyid(spcowner) AS spcowner, pg_catalog.pg_tablespace_location(oid), (SELECT array_agg(acl ORDER BY row_n) FROM   (SELECT acl, row_n FROM      unnest(coalesce(spcacl,acldefault('t',spcowner)))      WITH ORDINALITY AS perm(acl,row_n)    WHERE NOT EXISTS (      SELECT 1      FROM unnest(acldefault('t',spcowner))        AS init(init_acl)      WHERE acl = init_acl)) AS spcacls)  AS spcacl, (SELECT array_agg(acl ORDER BY row_n) FROM   (SELECT acl, row_n FROM      unnest(acldefault('t',spcowner))      WITH ORDINALITY AS initp(acl,row_n)    WHERE NOT EXISTS (      SELECT 1      FROM unnest(coalesce(spcacl,acldefault('t',spcowner)))        AS permp(orig_acl)      WHERE acl = orig_acl)) AS rspcacls)  AS rspcacl, array_to_string(spcoptions, ', '),pg_catalog.shobj_description(oid, 'pg_tablespace') FROM pg_catalog.pg_tablespace WHERE spcname !~ '^pg_' ORDER BY 1

pg_dumpall: executing SELECT datname FROM pg_database d WHERE datallowconn AND datconnlimit != -2 ORDER BY (datname <> 'template1'), datname
pg_dumpall: dumping database "template1"...
\connect template1

pg_dumpall: running ""/usr/local/bin/pg_dump"  -v  -Fp 'user=djangouser dbname=template1'"
pg_dump: last built-in OID is 16383
pg_dump: reading extensions
pg_dump: identifying extension members
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined access methods
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined operator families
pg_dump: reading user-defined text search parsers
pg_dump: reading user-defined text search templates
pg_dump: reading user-defined text search dictionaries
pg_dump: reading user-defined text search configurations
pg_dump: reading user-defined foreign-data wrappers
pg_dump: reading user-defined foreign servers
pg_dump: reading default privileges
pg_dump: reading user-defined collations
pg_dump: reading user-defined conversions
pg_dump: reading type casts
pg_dump: reading transforms
pg_dump: reading table inheritance information
pg_dump: reading event triggers
pg_dump: finding extension tables
pg_dump: finding inheritance relationships
pg_dump: reading column info for interesting tables
pg_dump: flagging inherited columns in subtables
pg_dump: reading partitioning data
pg_dump: reading indexes
pg_dump: flagging indexes in partitioned tables
pg_dump: reading extended statistics
pg_dump: reading constraints
pg_dump: reading triggers
pg_dump: reading rewrite rules
pg_dump: reading policies
pg_dump: reading row-level security policies
pg_dump: reading publications
pg_dump: reading publication membership
pg_dump: reading subscriptions
pg_dump: reading large objects
pg_dump: reading dependency data
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
pg_dump: saving search_path = 
pg_dump: implied data-only restore
--
-- PostgreSQL database dump
--

-- Dumped from database version 11.22
-- Dumped by pg_dump version 11.22

-- Started on 2023-12-05 15:37:10 CET

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

-- Completed on 2023-12-05 15:37:10 CET

--
-- PostgreSQL database dump complete
--

pg_dumpall: dumping database "djangodb"...
pg_dumpall: running ""/usr/local/bin/pg_dump"  -v --create -Fp 'user=djangouser dbname=djangodb'"
pg_dump: last built-in OID is 16383
pg_dump: reading extensions
pg_dump: identifying extension members
pg_dump: [archiver (db)] query failed: ERROR:  could not identify an ordering operator for type oid
LINE 1: ...ssid = 'pg_extension'::regclass AND deptype = 'e' ORDER BY 3
                                                                      ^
HINT:  Use an explicit ordering operator or modify the query.
pg_dump: [archiver (db)] query was: SELECT classid, objid, refobjid FROM pg_depend WHERE refclassid = 'pg_extension'::regclass AND deptype = 'e' ORDER BY 3
pg_dumpall: pg_dump failed on database "djangodb", exiting
smilerz commented 9 months ago

Connect to postgres using docker exec -it recipes_db psql -U djangouser -d djangodb

then run

reindex system djangodb;
reindex database djangodb;

and try again. If you still get an error, connect again and share the output of SELECT classid, objid, refobjid FROM pg_depend
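
Note that psql only executes a statement after a terminating semicolon. The same reindex can also be run non-interactively; a minimal sketch, assuming the container name and credentials from this thread:

# rebuild the system catalog indexes, then every index in the database
docker exec -it recipes_db psql -U djangouser -d djangodb -c "REINDEX SYSTEM djangodb;"
docker exec -it recipes_db psql -U djangouser -d djangodb -c "REINDEX DATABASE djangodb;"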

maxbrueckl commented 9 months ago

The output of pg_dumpall after running the two reindex commands is still the same.

The mentioned SELECT query does not seem to yield any results:

$ docker exec -it recipes_db psql -U djangouser -d djangodb
psql (11.22)
Type "help" for help.

djangodb=# SELECT classid, objid, refobjid FROM pg_depend
djangodb-#

Neither does SELECT classid, objid, refobjid FROM pg_depend WHERE refclassid = 'pg_extension'::regclass AND deptype = 'e' ORDER BY 3.
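
For what it's worth, the djangodb-# continuation prompt above indicates psql was still waiting for input, because the statement was entered without a terminating semicolon. A minimal sketch of the same check run to completion, using the container and credentials from this thread:

# psql -c runs a single statement and returns, no interactive session needed
docker exec -it recipes_db psql -U djangouser -d djangodb -c \
  "SELECT classid, objid, refobjid FROM pg_depend WHERE refclassid = 'pg_extension'::regclass AND deptype = 'e' ORDER BY 3;"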

smilerz commented 9 months ago

If you join Discord I can try to help you further - it's going to take some trial and error.

Lipown commented 9 months ago

Hello, I would appreciate help as well, since I am a total noob. I just noticed after a month that my instance of Tandoor is not working anymore, probably due to this :/

scepbjoern commented 9 months ago

I didn't have the same problem, but after following https://docs.tandoor.dev/faq/#how-can-i-upgrade-postgres-major-versions in combination with https://kopfkrieg.dev/2022/05/06/postgres-datenbank-im-docker-container-upgraden/ (I use Portainer rather than docker-compose), I can start up all containers and log in to the app. However, when I click on a recipe I only get an HTTP 500 error, and the logs of the db container contain messages like

2023-12-10 17:14:21.862 UTC [652] ERROR:  column cookbook_propertytype.fdc_id does not exist at character 252
2023-12-10 17:14:21.862 UTC [652] STATEMENT:  SELECT "cookbook_propertytype"."id", "cookbook_propertytype"."name", "cookbook_propertytype"."unit", "cookbook_propertytype"."order", "cookbook_propertytype"."description", "cookbook_propertytype"."category", "cookbook_propertytype"."open_data_slug", "cookbook_propertytype"."fdc_id", "cookbook_propertytype"."space_id" FROM "cookbook_propertytype" WHERE ("cookbook_propertytype"."space_id" = 1 AND "cookbook_propertytype"."space_id" = 1) ORDER BY "cookbook_propertytype"."order" ASC

Any ideas?

smilerz commented 9 months ago

Delete the contents of the database and try to import again. Also, make sure you are running the same version of tandoor as the exported version.

emresaglam commented 9 months ago

I'm having the same/similar problem after Postgres 11 upgrade. Web pages load, but when I click a recipe I get a similar error django.db.utils.ProgrammingError: column cookbook_propertytype.fdc_id does not exist

When I add a new column with ALTER TABLE cookbook_propertytype ADD fdc_id int; it starts working again.

I don't know what's stored in the fdc_id column and I don't know if it will incapacitate the tool later down the road. But we will see :)

EDIT: OK, more fallout: it looks like I needed to add more columns: ALTER TABLE cookbook_mealplan ADD from_date date; ALTER TABLE cookbook_mealplan ADD to_date date;

After adding these three columns to two tables, things started to play along nicely. No more errors in the logs.

EDIT 2: (At this point I'm reversing the upgrade and doing QA, I think) :D Since the from/to dates were added, I think the date column needed to be dropped for Meal Plan to work. One more thing to do is:

ALTER TABLE cookbook_mealplan DROP COLUMN date;
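
Consolidated, the manual workaround from this comment amounts to the statements below; a sketch only, using the container and credentials from this thread. These columns presumably come from newer Django migrations, so treat this as a stopgap rather than the supported path:

# stopgap: replicate the schema changes described above in one psql session
docker exec -i recipes_db psql -U djangouser -d djangodb <<'SQL'
ALTER TABLE cookbook_propertytype ADD COLUMN fdc_id int;
ALTER TABLE cookbook_mealplan ADD COLUMN from_date date;
ALTER TABLE cookbook_mealplan ADD COLUMN to_date date;
ALTER TABLE cookbook_mealplan DROP COLUMN date;
SQL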

scepbjoern commented 9 months ago

Delete the contents of the database and try to import again. Also, make sure you are running the same version of tandoor as the exported version.

@smilerz Thanks for your suggestions. If I understand you correctly, you suggest that I make a pg_dump of the Postgres 11 database that was used by Tandoor 1.5.6, then upgrade to Postgres 12, import the dump with psql, and still start Tandoor 1.5.6. Only after that should I update Tandoor to the latest version. Correct? I had assumed that Tandoor 1.5.6 cannot handle a Postgres 12 database.

scepbjoern commented 9 months ago


@emresaglam: Thank you for sharing your progress. I hope that there is a less "hacky" way of updating.

smilerz commented 9 months ago

@smilerz Thanks for your suggestions. If I understand you correctly, you suggest that I make a pg_dump of the Postgres 11 database that was used by Tandoor 1.5.6, then upgrade to Postgres 12, import the dump with psql, and still start Tandoor 1.5.6. Only after that should I update Tandoor to the latest version. Correct? I had assumed that Tandoor 1.5.6 cannot handle a Postgres 12 database.

Since you are having issues, I'm suggesting doing one change at a time so that we can isolate what's going wrong.

I'd also suggest you upgrade to PostgreSQL 16 eventually, as 12 will reach end of support roughly a year from now.

scepbjoern commented 9 months ago

Thanks, I will try it this evening and report back on what worked.

smilerz commented 9 months ago

Hello I would appreciate help also since I am total noob. I just notice after a

Manually adding columns is always going to lead to trouble - can you start from a fresh restore and share the logs from Tandoor at first start?

scepbjoern commented 9 months ago

I tried the following procedure without luck; I can now no longer log in to the web app (the application always redirects to the setup screen, so it seems it doesn't see the data in the database):

  1. Made the dump with pg_dumpall to a file called dump.sql and moved this file to my host
  2. Stopped all containers
  3. Updated to version 12 and only started the db container
  4. Restored the db from dump.sql, first with "docker exec tandoor-db psql -U djangouser postgres < ./dump.sql" (after step 5 also with "docker exec tandoor-db psql -U djangouser djangodb < ./dump.sql", as my database name is djangodb) -> BTW: tandoor_db is the name of the db container
  5. This command worked, and when I open a shell in the container with docker exec -it tandoor-db bash and list all the databases with \l, it shows four databases (postgres, djangodb, template0 and template1)
  6. As there was no success up to step 5, I then tried

Any ideas? Would it be possible to have a video call on Discord at the beginning of January?

smilerz commented 9 months ago

Please follow the directions and syntax outlined here.

https://docs.tandoor.dev/system/updating/
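
For orientation, the dump-and-restore flow for a major Postgres upgrade generally has the following shape; a sketch only, using the container name, user, and data path from this thread and assuming the new cluster is initialised with the same env file (substitute docker-compose for docker compose on older installs):

# 1. dump the whole cluster while the old (v11) container is still running
docker exec recipes_db pg_dumpall -U djangouser > ~/tandoor.sql
# 2. stop the stack and move the old data directory out of the way
docker compose down
mv $APPDATADIR/postgresql $APPDATADIR/postgresql_11_backup
# 3. bump the image tag in the compose file (e.g. postgres:16-alpine), then start only the database
docker compose up -d recipes_db
# 4. restore the dump; -i attaches stdin so the shell redirect reaches psql
docker exec -i recipes_db psql -U djangouser -d postgres < ~/tandoor.sql
# 5. start the rest of the stack and watch the web container's first-start logs
docker compose up -d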

sebimarkgraf commented 8 months ago

@scepbjoern Do you get any problems when starting the container? For me, migration 200 was failing due to a tag with a duplicate name. This led to exactly this error, as subsequent migrations are not executed when one migration fails. However, the system still started without the applied migrations.
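
If unapplied or failed migrations are suspected (as in the missing-column errors above), their status can be checked from inside the web container; a sketch, assuming the recipes_web container from the compose file above and that manage.py sits in the container's working directory:

# list the cookbook app's migrations: [X] applied, [ ] still pending
docker exec -it recipes_web python manage.py showmigrations cookbook
# attempt to apply anything still pending and surface the failing migration
docker exec -it recipes_web python manage.py migrate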