WillSquire opened this issue 5 years ago
You can use the diesel-migrations crate and do that programmatically. Here's what I'm doing:
mod all {
    use crate::db::Connection;

    embed_migrations!("migrations");

    pub fn run(connection: &Connection) {
        embedded_migrations::run_with_output(connection, &mut std::io::stdout())
            .expect("failed to run migrations");
    }

    pub mod prod {
        use crate::db::Connection;

        embed_migrations!("migrations/.seed");

        pub fn run(connection: &Connection) {
            embedded_migrations::run_with_output(connection, &mut std::io::stdout())
                .expect("failed to run prod seeds");
        }
    }

    pub mod dev {
        use crate::db::Connection;

        embed_migrations!("migrations/.seed.dev");

        pub fn run(connection: &Connection) {
            embedded_migrations::run_with_output(connection, &mut std::io::stdout())
                .expect("failed to run dev seeds");
        }
    }

    pub mod test {
        use crate::db::Connection;

        embed_migrations!("migrations/.seed.test");

        pub fn run(connection: &Connection) {
            embedded_migrations::run_with_output(connection, &mut std::io::stdout())
                .expect("failed to run test seeds");
        }
    }
}
pub fn migrate(connection: PooledConnection) {
    all::run(&connection);

    match Environment::default() {
        Environment::Development => all::dev::run(&connection),
        Environment::Test => all::test::run(&connection),
        Environment::Production => all::prod::run(&connection),
    }
}
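For context, here is a minimal sketch of what the `Environment` enum used in `migrate` above could look like. This is purely illustrative: the real project presumably has its own definition, and the `APP_ENV` variable name is an assumption.

```rust
use std::env;

// Hypothetical Environment enum matching the dispatch in `migrate`;
// reads an assumed APP_ENV variable, defaulting to Development.
#[derive(Debug, PartialEq)]
enum Environment {
    Development,
    Test,
    Production,
}

impl Default for Environment {
    fn default() -> Self {
        match env::var("APP_ENV").as_deref() {
            Ok("production") => Environment::Production,
            Ok("test") => Environment::Test,
            _ => Environment::Development,
        }
    }
}

fn main() {
    env::set_var("APP_ENV", "test");
    println!("{:?}", Environment::default()); // prints "Test"
}
```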
This feels like it could be a really useful addition to diesel-cli; I've used something similar in Knex.js. If we were to add something like that, I think it would be cool if we also supported the programmatic approach in addition to .sql files.
Imagine you already have helper functions that can create a bunch of rows very easily.
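As a rough illustration of that idea, here is a hedged sketch of a programmatic seed helper that builds rows in code rather than hand-writing .sql files. The table name, column, and function names are all hypothetical; real code would hand the resulting SQL to diesel (or use diesel's typed insert API directly).

```rust
// Hypothetical seed helper: build an INSERT statement from a list of rows.
// The `users`/`name` schema is made up for the sketch.
fn insert_users_sql(names: &[&str]) -> String {
    let values: Vec<String> = names
        .iter()
        .map(|n| format!("('{}')", n.replace('\'', "''"))) // naive escaping, sketch only
        .collect();
    format!("INSERT INTO users (name) VALUES {};", values.join(", "))
}

fn main() {
    // In real code you'd execute this via diesel, e.g. with sql_query.
    println!("{}", insert_users_sql(&["alice", "bob"]));
    // prints: INSERT INTO users (name) VALUES ('alice'), ('bob');
}
```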
@vladinator1000 Due to time constraints I don't have any plans to work on this myself in the near future, but I'm open to someone doing the design and implementation work here.
Similarly to @rafaelGuerreiro, I use a temporary solution with multiple migration folders: one for the schema migrations and one for the seeds. To make this usable when developing against a local database, I find a Makefile helpful:
.ONESHELL:

seed: migrate
	diesel migration --migration-dir seeds run

seed_make:
	read -p 'Name of the seed: ' name
	full_name=seeds/"$$(date -d '3000 years' '+%F-%H%M%S')"_"$$name"
	mkdir -p "$$full_name"
	touch "$$full_name"/up.sql
	touch "$$full_name"/down.sql

seed_reset:
	while :; do
		diesel migration --migration-dir seeds revert || break
	done

migrate:
	diesel migration run

migrate_make:
	read -p 'Name of the migration: ' name
	diesel migration generate "$$name"

reset: seed_reset
	while :; do
		diesel migration revert || break
	done
To make reverting work properly, I set the timestamp of the seed directories 3000 years in the future, so the seeds are always considered "the latest" when a rollback occurs.
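To make the timestamp trick concrete, here is what the `seed_make` recipe's directory name looks like when generated by hand (GNU `date` syntax; the seed name is made up):

```shell
# GNU date: '3000 years' is a relative offset from now, so the generated
# seed directory always sorts after any normally-dated migration.
full_name="seeds/$(date -d '3000 years' '+%F-%H%M%S')_add_users"
echo "$full_name"
# e.g. seeds/5025-06-14-120000_add_users (year = current year + 3000)
```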
This seems adaptable enough if you want separate seeds for development and for testing.
I'm using embed_migrations!() and embedded_migrations::run() to run pending migrations on startup, but I've hit a wall that's tricky to solve. In a nutshell, I'd like to seed the database with test data when it's currently running a test. The only way I can think of doing this is to leverage the existing migrations functionality, but I'm not quite sure how. Is this possible?
Ideally I'd prefer not to use migrations for this because there's a password to hash, but that might need to be phase 2, as it's likely going to be more work. Thoughts are welcome :).
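One possible direction for the password-hashing concern is to build the seed statement in code, so the hash is computed at seed time rather than baked into a .sql file. The sketch below is entirely hypothetical: `hash_password` stands in for whatever hashing the project uses (e.g. bcrypt or argon2), and the table and column names are made up.

```rust
// Placeholder for a real hash, e.g. bcrypt::hash(plain, DEFAULT_COST).
fn hash_password(plain: &str) -> String {
    format!("hashed:{}", plain)
}

// Hypothetical helper: build the SQL for a seeded test user, hashing
// the password in code instead of storing a hash in a .sql file.
fn seed_test_user_sql(email: &str, password: &str) -> String {
    format!(
        "INSERT INTO users (email, password_hash) VALUES ('{}', '{}');",
        email,
        hash_password(password)
    )
}

fn main() {
    println!("{}", seed_test_user_sql("test@example.com", "secret"));
}
```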