jschaf / pggen

Generate type-safe Go for any Postgres query. If Postgres can run the query, pggen can generate code for it.
MIT License
282 stars 26 forks

Support for migration tools #17

Closed sadmansakib closed 3 years ago

sadmansakib commented 3 years ago

Is it possible to enable support for migration tools? I tried to use pggen with golang-migrate. pggen didn't ignore the down SQL files, and codegen failed.

jschaf commented 3 years ago

I'm wary of adding tool-specific knowledge to pggen. However, you might be able to use --schema-glob to only pick up the forward migration files. What do the files look like?

sadmansakib commented 3 years ago

I have a small project with multiple migration files generated by the golang-migrate tool. For example, I have a migration for the customer schema, and golang-migrate generated two files for me: the first is 000001_customer.up.sql and the second is 000001_customer.down.sql. Here, the up file contains the forward migration and the down file is for the backward migration. Also, I used --schema-glob to pass the entire schema migration folder.

jschaf commented 3 years ago

Gotcha, I think pggen can work for you with something like:

pggen \
    --schema-glob first-schemas.sql \
    --schema-glob 'migrations/*.up.sql'

The schema files are applied in order of the --schema-glob flag. For a glob that matches multiple files, the files are sorted in lexicographical order. So the above pggen command runs:

  1. first-schemas.sql since it appeared first on the command line
  2. migrations/001_customer.up.sql - first sorted entry in second glob flag
  3. migrations/002_customer.up.sql - second sorted entry in second glob flag
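
Since the ordering is lexicographic rather than numeric, a prefix like 10 sorts before 2, which is why golang-migrate zero-pads its sequence numbers (000001, 000002, ...). The effect can be illustrated with the shell's own sort (the filenames here are made up for the demonstration):

```shell
# Lexicographic order puts '10' before '2', because '1' < '2' character-wise.
printf '%s\n' 'migrations/10_b.up.sql' 'migrations/2_a.up.sql' | sort
# migrations/10_b.up.sql
# migrations/2_a.up.sql

# Zero-padded prefixes keep lexicographic and numeric order in agreement.
printf '%s\n' 'migrations/000010_b.up.sql' 'migrations/000002_a.up.sql' | sort
# migrations/000002_a.up.sql
# migrations/000010_b.up.sql
```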

sadmansakib commented 3 years ago

Gotcha, I'll give it a shot.

sadmansakib commented 3 years ago

Update: it sort of works. My backward migrations are ignored, but after code generation I couldn't migrate down, since code generation doesn't depend on migration tools. I have to manually drop the tables from the database.

jschaf commented 3 years ago

I don't follow why you'd need to run backward migrations for pggen. Can you post an example of the schema and migrations you're applying?

The other escape hatch is that you can point pggen at an existing database that you create however you want. The --schema-glob flag is only to make it easy to use pggen without managing a temporary Postgres database. Here's how you'd use --postgres-connection.

pggen gen go \
    --query-glob author/query.sql \
    --postgres-connection "user=postgres port=5555 dbname=pggen"

sadmansakib commented 3 years ago

This is the complete command I am using:

pggen gen go \
    --schema-glob='migrations/schema/*.up.sql' \
    --query-glob='migrations/query/*sql' \
    --postgres-connection "user=ffff port=5432 dbname=test" \
    --output-dir='internal/db'

In my use case I have a customer schema with a gender field, which is an enum type. On the first run pggen works fine, but if I want to generate more code then I get this error: connect postgres: load schema file into Postgres: ERROR: type "gender" already exists (SQLSTATE 42710)

So, I was trying to migrate down before generating more code so that Postgres would stop complaining.

jschaf commented 3 years ago

OK, here's my understanding of the use case.

It sounds like you're using the same database for migration 001 and 002. So pggen already built up the schema with a gender type. Which means when you run 002, you get the Postgres error that gender already exists, because pggen already applied that schema for 001.

Is that description accurate? I'm half guessing at what you're trying to do. If that's not accurate, please provide a small, reproducible example with the output you expect.


First the bad news: I'm not going to add support for any way of running pggen that requires state tracking between different runs of pggen. pggen is a one-shot tool, meaning you run it every time you want to update something, similar to a compiler. Compilers only depend on the input files at the time the compiler runs. The input to pggen is a Postgres database and query files.

Now the good news: you can build whatever orchestration you want. If I understand your use-case, you could script something around pggen that resets the database before each run.
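
A minimal sketch of that orchestration, reusing the flags from the command above (the scratch database name and the script itself are hypothetical, not part of pggen):

#!/bin/sh
# Hypothetical wrapper: recreate a scratch database so every pggen run
# starts from an empty schema, then apply all up migrations via --schema-glob.
set -e
dropdb --if-exists pggen_scratch
createdb pggen_scratch
pggen gen go \
    --schema-glob 'migrations/schema/*.up.sql' \
    --query-glob 'migrations/query/*sql' \
    --postgres-connection "user=ffff port=5432 dbname=pggen_scratch" \
    --output-dir internal/db

Because the scratch database is dropped and recreated each time, the "type already exists" error never comes up, and no state needs to survive between runs.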


sadmansakib commented 3 years ago

In my use case, 001_customer.up.sql generates the customer schema. Then I write queries for customer, which generates files. Afterwards, if I want to add additional migration files with their own queries, I end up with an error.

I'll move to manual migration just as you said. It sounds like a good option for my use case.

jschaf commented 3 years ago

> I'll move to manual migration just as you said. It sounds like a good option for my use case.

Awesome! Good luck. I'll close as it looks like you're set.