srossross opened this issue 1 year ago
Hey @srossross, I've made a PoC with Atlas + Supabase which uses the `auth.user_id` foreign key in the public schema:
https://github.com/fenos/supabase-atlas-migrations/tree/main
The trick is to introspect the `auth` schema with Atlas, which will generate the HCL. Then run `atlas migrate diff` to generate the SQL migration. When you are ready to apply, pass the `--baseline=<version>` flag to the `atlas migrate` command, with the version equal to that of the just-generated SQL migration.
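The workflow above can be sketched roughly as follows (connection URLs, directory paths, and the baseline version are placeholders, and the exact flags should be checked against your Atlas version):

```shell
# 1. Introspect the Supabase-managed auth schema into HCL (placeholder URL)
atlas schema inspect \
  -u "postgres://user:pass@localhost:5432/postgres?sslmode=disable" \
  --schema auth > auth.hcl

# 2. Diff the desired schema against a dev database to generate the SQL migration
atlas migrate diff initial \
  --dir "file://migrations" \
  --to "file://schema.hcl" \
  --dev-url "docker://postgres/15/dev"

# 3. Apply, baselining at the version of the just-generated migration
atlas migrate apply \
  --url "postgres://user:pass@localhost:5432/postgres?sslmode=disable" \
  --dir "file://migrations" \
  --baseline "20240101000000"
```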
I have faced exactly the same issue, and I guess it exists in any ORM/migration management software. Initially I was trying to use Prisma to manage my DB schema, but since my `public.profiles` table referenced the `auth.users.id` field, I was getting the same error.
The problem here, as I see it, is that Atlas (and any other such tool) assumes it has full control over the database and over all the tables/schemas. Such tools rely on the assumption that all schemas/tables are specified in the definition file and that every other part of the DB is empty. There seems to be no way to define a 'stale' reference in the definition file, because that file is assumed to define everything in the database.
And the other problem is how to represent these 'dummy' references in the shadow/dev database (which is created purely from `schema.hcl`), where the 'external' referenced field (or trigger/function/etc.) indeed does not exist.
One possible solution I can imagine (besides adding all of Supabase's schemas like `realtime`, `storage`, and `auth` to `schema.hcl`) is a special compiler/executor pragma that would exclude the 'stale' references from the DB-to-`schema.hcl` diff calculations and add them directly to the migration files. (But this approach also isn't perfect, as it seems such references would be added to every migration SQL file, since they aren't tracked in the shadow/dev database.)
Oh, one interesting solution that just came to my mind is to provide a CLI/configuration option to specify a SQL file as the baseline for the shadow database. The shadow database would then be initialized not empty (totally from scratch), but with some definitions already in place, in sync with the existing schema in the real DB. Moreover, this would solve the issue of creating 'dangling' references, as the target of such references would exist in the dev DB and the SQL would be generated as usual.
Any ideas on the proposal?
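To illustrate the idea, such a baseline file might look roughly like this (a minimal sketch; the stub table only contains the column a foreign key would need, and is an assumption, not Supabase's actual `auth.users` definition):

```sql
-- baseline.sql (hypothetical): applied to the shadow/dev database before diffing,
-- so that references to auth.users resolve without Atlas managing the auth schema.
CREATE SCHEMA IF NOT EXISTS auth;

-- Stub of Supabase's auth.users: only the referenced column is declared.
CREATE TABLE IF NOT EXISTS auth.users (
    id uuid PRIMARY KEY
);
```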
I've been trying to figure this out as well. The `baseline` functionality in the dev Docker image doesn't work for this with Supabase's image: while it keeps Atlas from stepping on the Supabase-provisioned tables, it also makes it impossible to reference those tables.
I'm wondering if a short-term solution here could be to add a `sql()` escape hatch like there is for types, e.g.:
```hcl
trigger "do_a_thing_on_preexisting_table" {
  on = sql("\"fully_qualified\".\"table_name\"")
  ...
}
```
Alternatively, there could be something to tell the engine to skip trying to create certain objects, but otherwise treat them as valid for object-reference purposes:
```hcl
schema "storage" {}

table "objects" {
  schema         = schema.storage
  is_placeholder = true
}
```
I am currently using Supabase and have a column with a foreign key constraint on the `auth.users` table, defined in Atlas. However, I do not want to manage the `auth` schema in Atlas, as it is handled separately by Supabase. Is there a way to reference the `auth.users` table without managing its schema in Atlas?

Expected behavior: I would like to maintain the foreign key relationship with the `auth.users` table without having to manage its schema in Atlas. Ideally, the foreign key constraint should still be enforced, and any changes made to the `auth.users` table should be reflected in the referencing table.
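For reference, a foreign key of the kind described might look roughly like this in Atlas HCL (a sketch with hypothetical table and column names, not a snippet from the original issue; it assumes a `users` table in an `auth` schema is also declared, which is exactly the part the issue wants to avoid):

```hcl
table "profiles" {
  schema = schema.public
  column "id" {
    type = uuid
  }
  primary_key {
    columns = [column.id]
  }
  # References Supabase's auth.users table, which Supabase manages separately.
  foreign_key "profiles_id_fkey" {
    columns     = [column.id]
    ref_columns = [table.users.column.id] # assumes table "users" (schema "auth") is declared
    on_delete   = CASCADE
  }
}
```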