jasononeil opened this issue 9 years ago
> So what if we need to do a migration that involves transforming data? In other migration systems you just write it in code, but that makes it difficult to "run down": if you're on an old version of the codebase, you don't have the migration code to know what to run when undoing the migration.
Storing migration scripts/tasks in the database is clever, but I don't think it would be flexible and powerful enough.
An idea I had: each migration that is run will store its id in a `schema_migrations` table.
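As a rough sketch, that tracking table could be a plain `sys.db` record class. The class and field names below are assumptions for illustration, not ufront's actual API:

```haxe
import sys.db.Object;
import sys.db.Types;

/**
	Hypothetical record class for the `schema_migrations` table.
	One row is inserted each time a migration runs, so the set of
	rows describes exactly which migrations have been applied.
**/
class SchemaMigration extends Object {
	public var id:SUId;
	/** Unique id of the Migration class, e.g. "M20160101AddUsers". **/
	public var migrationId:SString<255>;
	/** When the migration was applied. **/
	public var appliedAt:SDateTime;
}
```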
Before running any actions, check the ids in `schema_migrations` and compare them against the Migration classes found in the codebase.

If there are any new Migration classes that `schema_migrations` does not have, then it is safe to migrate up.

If there are any tracked `schema_migrations` entries that the codebase does not have (due to rolling the codebase back a few commits), then like you mentioned, we cannot migrate down, since those migration classes do not exist in this past version. So instead, you can migrate neither up nor down, and would be advised to either:
a) Check out the commit that matches the latest tracked migration (probably master), migrate down to the version you need (we should find a way to make this manageable), then (re)check out the past commit.
or
b) haxelib run ufront migrate -resync
which will wipe the db, then migrate up to latest
That's a much better option. Just let them know which migrations need to be rolled back, and they can figure it out in version control.
Now to just code it.... ;)
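A first sketch of that decision logic, just comparing the tracked ids against the migration classes the codebase knows about (all names here are assumptions, not ufront's real API):

```haxe
enum MigrationPlan {
	UpToDate;
	/** New Migration classes in the codebase: safe to run them. **/
	MigrateUp(newMigrations:Array<String>);
	/** Tracked migrations missing from the codebase: advise checkout or -resync. **/
	Blocked(unknownMigrations:Array<String>);
}

class MigrationPlanner {
	/**
		Compare the ids stored in schema_migrations against the ids of
		the Migration classes found in the current codebase.
	**/
	public static function plan(trackedIds:Array<String>, codebaseIds:Array<String>):MigrationPlan {
		// Tracked in the DB but no matching class here: we can't run these down.
		var unknown = trackedIds.filter(function(id) return codebaseIds.indexOf(id) == -1);
		if (unknown.length > 0)
			return Blocked(unknown);
		// In the codebase but not yet tracked: safe to migrate up.
		var pending = codebaseIds.filter(function(id) return trackedIds.indexOf(id) == -1);
		if (pending.length > 0)
			return MigrateUp(pending);
		return UpToDate;
	}
}
```

The `Blocked` case is where the tool would print which migrations need to be rolled back, so the user can sort it out in version control or use `-resync`.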
The Aim
The Idea So Far
(Feel free to comment / edit...)
So in effect, the workflow would be like this:

- `MigrationAPI.runTempMigrations()` while you're working on it.
- `runMigrations` to get the DB in line...

Some questions:
Does it run via `haxelib run ufront migrate`, or does it run through a UFTasks file that is compiled as part of the project? The tasks file makes sense because at compile time you'd have access to the models. It might require more bootstrapping to get an initial setup, though...
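To make the tasks-file option concrete, here is the rough shape being discussed: a base class with `up`/`down` plus one example subclass, compiled with the project so it can see the models. None of these names are ufront's real API; this is purely a hypothetical sketch.

```haxe
/** Hypothetical base class; its `id` is the key stored in schema_migrations. **/
class Migration {
	public var id:String;
	public function new(id:String) {
		this.id = id;
	}
	public function up():Void {}
	public function down():Void {}
}

/** Example migration, named so that ids sort in creation order. **/
class M20160101AddUsers extends Migration {
	public function new() {
		super("M20160101AddUsers");
	}
	override public function up():Void {
		// Because this compiles with the project, it has access to the models,
		// so migrations that transform data can be written in ordinary code.
		Sys.println("Creating `users` table...");
	}
	override public function down():Void {
		Sys.println("Dropping `users` table...");
	}
}
```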