deankarn opened this issue 2 years ago
While I don't think it would be that hard to add something like this, I fear that you're setting up to shoot yourself in the foot doing it this way.
If multiple applications are looking at the same database, you really want a single source of truth for migrations so you don't end up in a situation where you have migrations clobbering each others' work because they touch the same tables but were defined in different parts of the code.
What you could do is have a separate crate specifically to hold and embed the migrations, and then invoke this crate from your applications:
`migrations/src/lib.rs`:

```rust
use sqlx::PgPool;

pub async fn run(pool: &PgPool) -> sqlx::Result<()> {
    sqlx::migrate!().run(pool).await?;
    Ok(())
}
```

`app/src/main.rs`:

```rust
use sqlx::PgPool;

#[tokio::main] // or your async runtime's equivalent
async fn main() -> anyhow::Result<()> {
    let pool = PgPool::connect("<DATABASE URL>").await?;
    migrations::run(&pool).await?;
    // ...
    Ok(())
}
```
If you want applications to only run migrations that they actually need, then I think we're talking more about having a graph-based resolver for migrations where you can express more complex "depends-on" relationships than you can with the current linear resolver. That sounds really cool to implement but I think it still risks running into the same issue of potential conflicts between sibling migrations.
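Purely as a conceptual sketch of that 'depends-on' idea (nothing like this exists in sqlx today; the `Migration` struct and `resolve_order` function below are invented for illustration), such a resolver would essentially be a topological sort over migrations that declare explicit dependencies:

```rust
use std::collections::{HashMap, VecDeque};

/// Conceptual sketch only (not an sqlx API): a migration that declares which
/// other migration versions it depends on.
#[derive(Debug)]
struct Migration {
    version: i64,
    depends_on: Vec<i64>,
}

/// Resolve a valid run order with Kahn's topological sort; errors on cycles
/// or references to unknown migrations.
fn resolve_order(migrations: &[Migration]) -> Result<Vec<i64>, String> {
    // Count unmet dependencies per migration and index who depends on what.
    let mut unmet: HashMap<i64, usize> = migrations
        .iter()
        .map(|m| (m.version, m.depends_on.len()))
        .collect();
    let mut dependents: HashMap<i64, Vec<i64>> = HashMap::new();
    for m in migrations {
        for &dep in &m.depends_on {
            dependents.entry(dep).or_default().push(m.version);
        }
    }

    // Start with migrations that depend on nothing.
    let mut ready: VecDeque<i64> = unmet
        .iter()
        .filter(|&(_, &n)| n == 0)
        .map(|(&v, _)| v)
        .collect();

    let mut order = Vec::with_capacity(migrations.len());
    while let Some(v) = ready.pop_front() {
        order.push(v);
        for &next in dependents.get(&v).into_iter().flatten() {
            if let Some(n) = unmet.get_mut(&next) {
                *n -= 1;
                if *n == 0 {
                    ready.push_back(next);
                }
            }
        }
    }

    if order.len() == migrations.len() {
        Ok(order)
    } else {
        Err("cycle detected or dependency on an unknown migration".to_string())
    }
}
```

Even with a resolver like this, two independent branches can still touch the same table, which is the conflict risk mentioned above.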
@abonander I agree that the norm is to have a single source of truth (a crate/repository/codebase that centrally controls your company's schema), but this is not the only use case.

My current use case is that I'm building a set of products that companies can use. Each is an individual product/application that I want to make as easy as possible to deploy by making it a self-contained binary containing the migrations for that specific product/application.

There are several applications I want to do this for that all have their own migrations; however, as I said, they are independent and don't have to be deployed together, or even at all. Each individual application also doesn't need a dedicated database, due to the low throughput/IO of some of the applications.

I'm hoping this can be supported; otherwise the only other option right now is creating a new database for each application, which is heavy, costly, and should be unnecessary.
> I'm hoping this can be supported otherwise the only other option right now is creating a new database for each application which is both heavy, costly
Really? It shouldn't be. What database and host are you using?
Yes, I know you can create different schema namespaces and DBs in Aurora, but it would be nice not to have to distribute the tables all over the place.

Feel free to close this if you want; there are options, just sub-optimal ones.
As someone who has started using sqlx for a monorepo that contains multiple services, I would also like to use the `migrate!` macro to apply migrations within a single Postgres DB across segregated schemas.

At the moment, I've patched `sqlx-core` to accept a `schema` in the `migrate!` macro, which allows me to run the migrate commands against siloed schemas. I've only patched Postgres, and I think it would take a bit more work to make this reasonable for providers like SQLite.
While I agree with @abonander in spirit, I do think offering flexibility to allow someone to shoot themselves in the foot is warranted. In particular, if I want to impose a particular segregation scheme, then it's up to me to not screw it up by practicing good hygiene in my DB.
Let me know if you believe providing an optional segregation param to `migrate!` works as an API, and I'm happy to put together a more carefully considered PR for all the existing providers. In this case, the API would be something like:

`migrate!("path/to/migrations", "optional_scheme_prefix")`
> If multiple applications are looking at the same database, you really want a single source of truth for migrations so you don't end up in a situation where you have migrations clobbering each others' work because they touch the same tables but were defined in different parts of the code.

Caveat emptor -- it's entirely possible to create a migration set that updates database roles and permissions within different schemas to make sure said clobbering of work doesn't happen in the first place. Then all the apps/clients you wish to co-host in the same database are migrated with, and accessed by, different security roles. In a sophisticated database setup, the super admin user isn't used for migrations or for access by services.
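To make that concrete, here is a rough sketch of such an admin-run provisioning step (the `provision_app` name, the naming convention, and the password handling are invented for the example; a real setup would likely do this in its own migration set run by the admin user): each application gets a schema plus a role confined to it.

```rust
use sqlx::PgPool;

/// Sketch of an admin-run step: one schema and one confined role per
/// application, so sibling applications can't clobber each other's tables.
/// Identifiers can't be bound as query parameters, so this uses format! for
/// illustration only; validate `app` before interpolating it in real code.
async fn provision_app(admin_pool: &PgPool, app: &str, password: &str) -> sqlx::Result<()> {
    sqlx::query(&format!("CREATE SCHEMA IF NOT EXISTS {app}"))
        .execute(admin_pool)
        .await?;
    sqlx::query(&format!("CREATE ROLE {app}_role LOGIN PASSWORD '{password}'"))
        .execute(admin_pool)
        .await?;
    // The role may use and create objects only in its own schema.
    sqlx::query(&format!("GRANT USAGE, CREATE ON SCHEMA {app} TO {app}_role"))
        .execute(admin_pool)
        .await?;
    Ok(())
}
```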
I have the same issue.
Has anyone considered using the migrations table as the source of truth? This would require the migrations table to store the SQL for each migration, and some sort of syncing logic within the CLI/migrator to keep each application's migration files up to date with what is stored in the database.

Before any migration operation is run, the CLI/migrator would ensure local migration state matches the state of the migrations table.

I'm just thinking aloud here, but it seems reasonable for my situation. Maybe it's the dawn of the pseudo-monolith, where DB state is shared between projects/services and APIs aren't always required for them to communicate.
Also, please feel free to tell me if this is a horrible idea and I've lost my mind.
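As a very rough sketch of the verification half of that idea (assuming the `version` and `checksum` columns that sqlx's `_sqlx_migrations` table currently has, plus the public `Migrator::iter()`; `verify_local_matches_db` is a made-up name), an application could refuse to run until its embedded migrations agree with what the database reports:

```rust
use sqlx::{migrate::Migrator, PgPool};

// Embeds ./migrations at compile time, as usual.
static MIGRATOR: Migrator = sqlx::migrate!();

/// Sketch: compare the checksums recorded in _sqlx_migrations with the locally
/// embedded migrations and bail out if they have diverged.
async fn verify_local_matches_db(pool: &PgPool) -> anyhow::Result<()> {
    let applied: Vec<(i64, Vec<u8>)> =
        sqlx::query_as("SELECT version, checksum FROM _sqlx_migrations ORDER BY version")
            .fetch_all(pool)
            .await?;

    for (version, checksum) in applied {
        match MIGRATOR.iter().find(|m| m.version == version) {
            Some(local) if local.checksum.as_ref() == checksum.as_slice() => {}
            Some(_) => anyhow::bail!("migration {version} differs between the DB and local files"),
            None => anyhow::bail!("migration {version} is applied in the DB but missing locally"),
        }
    }
    Ok(())
}
```

Keeping the local files up to date from the database (the syncing half) would still need somewhere to store the migration SQL itself, which the table doesn't hold today.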
While I was creating a few separate applications that used migrations, I ran into this issue:

After looking into it, it appears that there isn't support for multiple applications running migrations on the same database. I noticed a couple of other issues reporting similar problems:

Although either of these might resolve the issue, the first still requires migrations to be run within the same application, and the second starts spreading around where migrations live.
I would like to propose that `sqlx::migrate!()` gain an additional parameter specifying a unique key/application name/ID that would be added as a column to the `_sqlx_migrations` table in the database, to separate migrations from one another. This would support migrations from multiple applications while keeping the migration logic in a single place.

Thank you in advance for the consideration.