Closed · captainvera closed this issue 1 year ago
If I follow, the error is occurring while trying to autogenerate a new migration, not during deployment, correct?
Materialized views allow arbitrary SQL in their definitions, which is different from the entities alembic manages. To account for that, the diffing during autogenerate uses a separate workflow. When you create a materialized view in Postgres, it stores the underlying query. That query is parsed and reformatted, so the local text blob in your Python project no longer matches the text that is stored in Postgres. That makes it difficult to determine whether the definition has changed during `--autogenerate`.
The way alembic_utils checks to see if the definition of a materialized view has changed is to:
The error you're seeing occurs at step 2, while alembic_utils is figuring out which (if any) of your local database entities have changed.
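As a rough illustration of why a plain text comparison fails: Postgres re-renders the stored query, so any comparison has to normalize formatting first. A minimal sketch, assuming only whitespace/case normalization (a simplification of whatever alembic_utils actually does internally):

```python
import re

def normalize(sql: str) -> str:
    """Collapse whitespace and lowercase so purely cosmetic
    formatting differences don't register as a changed definition."""
    return re.sub(r"\s+", " ", sql).strip().lower()

# How the definition might look in your Python project...
local_def = "SELECT id,\n       name\nFROM   users"
# ...versus how Postgres might re-render the same query
stored_def = "select id, name from users"

print(normalize(local_def) == normalize(stored_def))  # True
```

Real definitions can of course differ in ways normalization alone can't reconcile (aliases, explicit casts Postgres adds), which is part of why this diffing needs its own workflow.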
A solution to get you unblocked locally would be to manually execute
alter materialized view <mat view schema>.<mat view name> owner to <local migration role name>
to re-align the materialized view's owner to the role that produces the autogenerated migrations, but that shouldn't be necessary
Are you aware of any reason why the role name that is locally producing the migrations would differ from the role that applied the migrations to that local instance?
For example, spinning up your local development database from a dump of production, where the role names are different, would cause this.
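To see which role currently owns the view before running that `alter`, you can consult the `pg_matviews` catalog. A small sketch that just builds the two statements (the helper names and identifiers here are hypothetical placeholders, and identifiers are interpolated directly for readability rather than safely quoted):

```python
def owner_check_sql(schema: str, name: str) -> str:
    # Look up the current owner of a materialized view in the pg_matviews catalog.
    return (
        "select matviewowner from pg_matviews "
        f"where schemaname = '{schema}' and matviewname = '{name}'"
    )

def alter_owner_sql(schema: str, name: str, new_owner: str) -> str:
    # The ALTER statement suggested above, parameterized.
    return f"alter materialized view {schema}.{name} owner to {new_owner}"

print(alter_owner_sql("public", "my_view", "migration_role"))
# alter materialized view public.my_view owner to migration_role
```

Run the first query, compare the result with the role your migration connection uses, and only then apply the `alter` if they differ.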
Thank you for the explanation!
From your questions at the end I understand what I am doing wrong. You see, I am not connecting to a local development database, but running the `alembic revision` command while connecting directly to a hosted staging DB.
Because I was not aware of any sort of transaction actually being made, I believed it to be a no-impact process and therefore one not needing a local dev DB (not best practice, but hey :) ).
In the end, what is happening is exactly what you described: the role that applies the migrations in staging is set up similarly to the one I mentioned in the original issue with a k8s job, so the role that applies the migration (remotely) is indeed different from the one that creates the migrations locally.
Guess I'll have to set up some best practices! Thanks again!
Hey there! First of all, thanks for this great package!
I am managing a simple DB with 1 table and a materialized view on that table.
It is a managed PostgreSQL DB in AWS, and I have access to it through StrongDM. In our production environment we have a user whose credentials get shared directly with the Kubernetes pods that need access to this DB.
My usual workflow is creating the alembic migration locally with `alembic revision -m "..." --autogenerate`, committing it to the remote repo, and on deployment a k8s job runs `alembic upgrade head`. This worked great for everything related to the table and for creating the materialized view.
However, after the creation of the materialized view, every time I try to make a new migration (unrelated to the materialized view) I get the following error:
It seems that I need to be the owner of the materialized view? But I am making no changes to it.
Two other interesting things also happened:

1. If I pass an empty list to `register_entities` in alembic's `env.py`, it will allow me to run the migration without dropping the view.
2. If I remove the old entity from `register_entities` and add a new one (instead of an empty list), it will actually drop the old one and create the new one.

The current workaround I am using to continue doing migrations is 1., but it is a weird hack and I am sure this is not intended behaviour. I also see no reason that alembic_utils would need me to be the owner of the mat view when doing no changes to it and running an unrelated migration, especially when I can do it fine for all other entities.
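For reference, the `env.py` wiring of `register_entities` being discussed looks roughly like this. The view name and definition below are placeholders, not the actual ones from this project; check the alembic_utils README for the exact constructor parameters:

```python
# in alembic's env.py
from alembic_utils.pg_materialized_view import PGMaterializedView
from alembic_utils.replaceable_entity import register_entities

# Placeholder entity — replace signature/definition with your own.
my_mat_view = PGMaterializedView(
    schema="public",
    signature="my_mat_view",
    definition="select * from my_table",
)

# Registering the entity lets --autogenerate diff it against the database.
# Passing an empty list here is the workaround mentioned above: alembic_utils
# then skips materialized-view diffing entirely.
register_entities([my_mat_view])
```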
Thanks in advance, Miguel