Closed: cfloressuazo closed this 1 month ago
hi -
Alembic is a tool that emits SQL commands to modify structures in a database and keeps track of them in a versioning table. It emits these commands correctly; the error you're seeing comes from your distributed database receiving other traffic on other nodes while this happens. Handling that is in the scope of managing the database itself: setting appropriate server/client configuration, or ensuring that the commands you send to it are isolated from other changes. All of this is fully outside the scope of what Alembic could ever do on its own, not to mention that we don't have support for the YugabyteDB database. I would seek help from the YugabyteDB developers.
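Since the failure is a transaction-level serialization conflict rather than anything in the SQL Alembic emits, one common client-side mitigation (outside Alembic's scope, and only a hedged sketch, not a confirmed fix for YugabyteDB) is to retry the operation when the database reports a serialization error. The `SerializationFailure` class below is a stand-in for the real driver exception, e.g. `psycopg2.errors.SerializationFailure`:

```python
class SerializationFailure(Exception):
    """Stand-in for the driver's serialization-failure error."""

def run_with_retry(fn, retries=3, retryable=(SerializationFailure,)):
    """Call fn(), retrying up to `retries` times on retryable errors."""
    for attempt in range(retries):
        try:
            return fn()
        except retryable:
            if attempt == retries - 1:
                raise  # retries exhausted; re-raise the last error

# Demo: a callable that fails twice with a serialization error, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationFailure("restart transaction")
    return "ok"

print(run_with_retry(flaky))  # prints: ok
```

Whether retrying is safe depends on the statements involved; DDL that is not idempotent should not simply be replayed.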
Hi,
You may try setting the flag https://alembic.sqlalchemy.org/en/latest/api/runtime.html#alembic.runtime.environment.EnvironmentContext.configure.params.transaction_per_migration and see if it helps in your case.
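For reference, that flag is passed where `env.py` calls `context.configure()`. A minimal sketch of the relevant section (your `env.py` will differ; the structure here follows Alembic's standard online-migration template, and `target_metadata` is a placeholder):

```python
# Excerpt from a typical Alembic env.py: with transaction_per_migration=True,
# each migration (including its alembic_version update) runs in its own
# transaction instead of one transaction spanning the whole upgrade.
from alembic import context

def run_migrations_online(connectable):
    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=None,  # replace with your MetaData
            transaction_per_migration=True,
        )
        with context.begin_transaction():
            context.run_migrations()
```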
**Describe the bug**
When running multiple migrations (more than one, on a new database) in a single command against YugabyteDB using `flask db upgrade` or `alembic upgrade`, the operation fails. The migration script runs and creates all the tables, but the final step, where it has to update the version table, fails with a `SerializationFailure` error.

**Expected behaviour**
The `flask db upgrade` command should successfully apply all migrations and update the `alembic_version` table with the correct version number.
**To Reproduce**
The issue can be reproduced by running multiple migrations in a single command with YugabyteDB, via `flask db upgrade` or `alembic upgrade`. Here's a simplified example migration:
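Spelled out as commands, the reproduction is roughly the following (the revision messages are placeholders, and `alembic.ini` is assumed to point at a YugabyteDB instance):

```shell
# Generate two revisions, then apply both in a single upgrade command.
alembic revision -m "first"
alembic revision -m "second"
alembic upgrade head   # fails while updating alembic_version
```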
```python
"""

Revision ID: eb6e925edca4
Revises:
Create Date: 2023-11-28 14:59:44.514650

"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = 'eb6e925edca4'
down_revision = None
branch_labels = None
depends_on = None


def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    ...  # (table creation commands omitted)


def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    ...  # (drop commands omitted)
```
**Error**

**Versions**
**Additional context**
The operation that updates the `alembic_version` table takes less than 2 seconds, so the issue is unlikely to be caused by a timeout. It persists even on a fresh database with no other connections.
Have a nice day!