Closed whisperity closed 3 months ago
I noticed that instead of utilizing SQLAlchemy's built-in methods, raw SQL queries were executed directly. While this approach can work, it may introduce potential risks (for example, when a table or column name changes).
Would it be possible to refactor the queries?
> I noticed that instead of utilizing SQLAlchemy's built-in methods, raw SQL queries were executed directly.
> Would it be possible to refactor the queries?
That is very common during migration: [1], [2], [3]. When this script is executing, the schema in the database is not available in an ORM-y way, and there is a very high chance that the schema in the database "right now" does not match what the db model definitions in the Python package contain.
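A minimal sketch of that mid-migration situation, using only the standard-library `sqlite3` and a hypothetical `reports` table with a made-up `bug_type` column (the real schema differs; this only illustrates why the "right now" schema and the model definitions can disagree):

```python
import sqlite3

# Hypothetical mid-migration state: the live database still has the old
# schema, without the 'checker_id' column the current model definitions expect.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (id INTEGER PRIMARY KEY, bug_type TEXT)")
conn.execute("INSERT INTO reports (bug_type) VALUES ('null-deref')")

# A raw query written against the schema "right now" works fine...
rows = conn.execute("SELECT id, bug_type FROM reports").fetchall()
print(rows)  # [(1, 'null-deref')]

# ...while a query generated from the *new* model definitions would fail,
# because the column it references does not exist yet.
try:
    conn.execute("SELECT checker_id FROM reports")
    model_query_ok = True
except sqlite3.OperationalError:
    model_query_ok = False
print(model_query_ok)  # False
```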
`automap` (`reflect`) allows for some very limited functionality to figure out some things and go back to relying on SQLAlchemy to generate queries.
> While this approach can work, it may introduce potential risks (for example, when a table or column name changes).
Not even `automap` solves this. You have to rely on the exact table and column names, hence the

```python
Report = Base.classes.reports  # 'reports' is the table name!
Checker = Base.classes.checkers
```

in the earlier code.
I can try changing this specific query to use `automap` here, as the `checkers` table (or its contents) is not mutated in this function.
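A rough sketch of what such an `automap`-based refactor could look like. The `checker_name` column and the in-memory engine are illustrative assumptions, not the actual schema:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

# Hypothetical in-memory stand-in for the live database mid-migration.
engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE checkers ("
        "  id INTEGER PRIMARY KEY,"
        "  checker_name TEXT)"))
    conn.execute(text(
        "INSERT INTO checkers (id, checker_name) "
        "VALUES (1, '__FAKE__'), (2, 'core.NullDereference')"))

# Reflect whatever schema exists *right now*; no up-to-date model
# definitions are needed, but the table name must still match exactly.
Base = automap_base()
Base.prepare(autoload_with=engine)
Checker = Base.classes.checkers  # 'checkers' is the table name!

with Session(engine) as session:
    names = [c.checker_name
             for c in session.query(Checker).order_by(Checker.id)]
print(names)  # ['__FAKE__', 'core.NullDereference']
```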
The first row in a table with an `AUTOINCREMENT` ID has the value 1, not 0. This was synchronised in the migration script to have `server_default="1"` such that all default-constructed `report` rows have a valid `FOREIGN KEY` to the `checkers` table (the `__FAKE__` checker). But the hand-written upgrade query filtered for `0` to get a batch of "unhandled rows", essentially preventing appropriate migration.
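The off-by-one can be reproduced with plain `sqlite3` (the two-column `reports` table here is a simplified stand-in): with `server_default="1"` and `AUTOINCREMENT` starting at 1, a filter on `0` never matches, so the batch loop never processes anything:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE reports (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        checker_id INTEGER NOT NULL DEFAULT 1  -- server_default="1"
    )""")
conn.execute("INSERT INTO reports DEFAULT VALUES")
conn.execute("INSERT INTO reports DEFAULT VALUES")

# Default-constructed rows carry checker_id = 1 (the __FAKE__ checker),
# so filtering for 0 selects nothing: no "unhandled rows" are ever found.
unhandled_wrong = conn.execute(
    "SELECT id FROM reports WHERE checker_id = 0").fetchall()
unhandled_right = conn.execute(
    "SELECT id FROM reports WHERE checker_id = 1").fetchall()
print(unhandled_wrong)  # []
print(unhandled_right)  # [(1,), (2,)]
```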