dickermoshe closed this issue 3 weeks ago
Did you mean to link to this snippet? The tests you've linked to aren't related to migrations.
Inferring migrations from a schema diff is not trivial. There are good tools that can help here (e.g. Alembic from SQLAlchemy), but these tools all assume that they will generate code that users read carefully and modify if needed. There's also a question of how much value that provides — with the current APIs, migrations are relatively easy to write (thanks to `Migrator.alterTable` and other convenience methods), to structure (step-by-step migrations), and to test (schema validation by default, plus tests with existing data if necessary).
Making a suggestion for what migrations could look like is definitely helpful, but it would have to be some new format (users are fundamentally expected to own that generated code and make modifications to it, so it can't be something that the next CLI command simply overwrites). I don't think there's a way to automate testing with pre-existing data. Adding a simple test that loops over all schema variations to make sure that the schema transformation without data checks out is easy, but it's also easy for users to do, and there's no need for a CLI tool to generate it. We could add this snippet to the documentation though:
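A sketch of such a loop, based on the `SchemaVerifier` API from `drift_dev` — the database class name and the `latestVersion` constant are assumptions for illustration:

```dart
import 'package:drift_dev/api/migrations.dart';
import 'package:test/test.dart';

// `GeneratedHelper` comes from the schema test code generated by
// `dart run drift_dev schema generate`; `MyDatabase` is a placeholder
// for the app's database class.
import 'generated/schema.dart';

void main() {
  // Assumption: the latest schema version of the database under test.
  const latestVersion = 2;

  late SchemaVerifier verifier;

  setUpAll(() {
    verifier = SchemaVerifier(GeneratedHelper());
  });

  test('can migrate from any old schema version to the latest', () async {
    for (var start = 1; start < latestVersion; start++) {
      // Open a database at the old schema version...
      final connection = await verifier.startAt(start);
      final db = MyDatabase(connection);

      // ...then run the migration and validate the resulting schema.
      await verifier.migrateAndValidate(db, latestVersion);
      await db.close();
    }
  });
}
```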
@simolus3 I forgot to change the name of the Issue, I'm only referring to tests.
Right, but could you clarify which ones?
Generating snippets which look like this:
https://drift.simonbinder.eu/docs/migrations/tests/#verifying-data-integrity
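For context, the data-integrity tests at that link follow roughly this shape (a sketch based on drift's `SchemaVerifier` API; the table, column, and database class names are assumptions):

```dart
import 'package:drift_dev/api/migrations.dart';
import 'package:test/test.dart';

// v1/v2 are the generated schema snapshots; `Database` is a placeholder
// for the app's database class.
import 'generated/schema.dart';
import 'generated/schema_v1.dart' as v1;
import 'generated/schema_v2.dart' as v2;

void main() {
  late SchemaVerifier verifier;

  setUpAll(() {
    verifier = SchemaVerifier(GeneratedHelper());
  });

  test('migration from v1 to v2 preserves data', () async {
    final schema = await verifier.schemaAt(1);

    // Insert data using the old snapshot of the database.
    final oldDb = v1.DatabaseAtV1(schema.newConnection());
    await oldDb
        .into(oldDb.todos)
        .insert(v1.TodosCompanion.insert(title: 'test'));
    await oldDb.close();

    // Run the migration and validate the new schema.
    final db = Database(schema.newConnection());
    await verifier.migrateAndValidate(db, 2);
    await db.close();

    // Verify the data survived, using the new snapshot.
    final migratedDb = v2.DatabaseAtV2(schema.newConnection());
    final todo = await migratedDb.select(migratedDb.todos).getSingle();
    expect(todo.title, 'test');
    await migratedDb.close();
  });
}
```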
I originally thought that maybe we could use sqldef, but it doesn't support renames or complex column alterations.
It's really neat though, it can do this:
step1.sql:
```sql
CREATE TABLE `user` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(191) DEFAULT 'k0kubun',
  PRIMARY KEY (`id`)
)
```
step2.sql:
```sql
CREATE TABLE `user` (
  `id` varchar(191) NOT NULL,
  `name` varchar(191) DEFAULT 'k0kubun',
  `email` varchar(191) NOT NULL,
  PRIMARY KEY (`id`)
)
```
```console
PS C:\Users\dicke\OneDrive\Desktop> .\sqlite3def.exe /dry-run /f .\step1.sql .\step2.sql
-- dry run --
ALTER TABLE `user` ADD COLUMN `email` varchar(191) NOT NULL;
```
@simolus3 What do you think about a CLI command that wraps up `dump`, `generate`, and `steps`, and reads its folder configuration from pubspec.yaml/build.yaml:
```yaml
drift:
  # Where schema.json files are stored
  schema_dir: drift/drift_schemas # default
  # Where generated tests are placed
  test_dir: test/drift # default
  # Databases
  databases:
    main_db: lib/db/db.dart
```
This would allow all of the following to be generated with a single command:
```console
dart run drift_dev makemigrations
```
```
├── drift_schemas
│   └── main_db                 # Uses the database name from above
│       ├── drift_schema_v1.json
│       └── drift_schema_v2.json
├── lib
│   └── db
│       ├── db.dart             # This is the file which contains the `main_db` database
│       ├── db.g.dart
│       └── db.steps.dart       # This is generated automatically and should not be modified
└── test
    └── drift
        └── main_db
            ├── schema
            │   ├── schema.dart
            │   ├── schema_v1.dart
            │   └── schema_v2.dart
            └── generated_test.dart # Runs a basic test
```
We can introduce these as builder options that are ignored by the builder and only picked up by the CLI:
```yaml
targets:
  $default:
    builders:
      drift_dev:
        # This is where the options would go
        schema_dir: ...
```
There's existing logic in drift_dev to extract the drift configuration from those files when running outside of builds, so that should work. It's not as neat as a top-level key, but I'm not a fan of tools that insist on adding keys to pubspec.yaml, since that's not the purpose of that file.
Then that's where it'll go.
We can also have a `before_after.dart` file, which users fill with before-and-after companions to test data integrity:
```dart
final beforeAfterTest = beforeAfter(
  before1to2: Before1to2(
    todos: [
      v1.Todo(...),
    ],
  ),
  after1to2: After1to2(
    todos: [
      v2.Todo(...),
    ],
  ),
);
```
What do you think about this concept?
Definitely an interesting idea, it would make data integrity tests much easier to write. I think instead of `Before` and `After` we could just name them `DataAtV2` for simplicity, but the general idea is great.
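Restating the proposed snippet with that naming — all of these identifiers are part of the proposal and hypothetical, nothing here exists in drift today:

```dart
// Hypothetical API: one "data at version N" container per schema version,
// instead of separate Before/After pairs per migration step.
final beforeAfterTest = beforeAfter(
  dataAtV1: DataAtV1(
    todos: [
      v1.Todo(...),
    ],
  ),
  dataAtV2: DataAtV2(
    todos: [
      v2.Todo(...),
    ],
  ),
);
```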
Testing Migrations
Tests like these should be generated automatically.