Open eldad87 opened 5 years ago
I never planned to add support for this, to keep things simple. Embedding .go migrations means pulling in the whole Go compiler, which would make both the CLI tool and the embedded library much more complex.
I'd be open to a pull request that adds this, but it needs to be clean and must not make the footprint of the library many times larger.
https://github.com/pressly/goose supports doing this by making your own binary that loads your go migrations.
Yup, Goose does this, so you might want to check that one out.
Sql-migrate was built mainly for use as a library, to embed migrations into your self-contained executable without requiring anything else. It therefore has different trade-offs.
I tried patching goose at first, but turning it into a library would've meant such a big rewrite that a clean slate was more productive (for one, the error handling in goose was horrendous, though that might have been fixed by now).
Use what works best for you!
Hi all,
I was interested in doing something similar here (injecting some executable Go code in between migrations), and I came up with a solution.
Some context: I have a folder full of SQL migration scripts. The last two are new, and I want to execute some custom code between those two scripts.
func main() {
	migrations := &migrate.PackrMigrationSource{
		Box: packr.NewBox("sql"),
	}

	// the magic happens in here
	_, err := merkleRootMigration(migrations)
	if err != nil {
		log.Println("couldn't execute special migration setup scripts")
		log.Fatal(err)
	}

	// when the special migration logic is completed,
	// keep running migrations as necessary
	n, err := migrate.Exec(db, "postgres", migrations, migrate.Up)
	if err != nil {
		log.Println("couldn't execute migrations")
		log.Fatal(err)
	}
	fmt.Printf("Applied %d migrations!\n", n)
}
func merkleRootMigration(migrations migrate.MigrationSource) (int, error) {
	n := 0
	records, err := migrate.GetMigrationRecords(db, "postgres")
	if err != nil || len(records) == 0 {
		return n, err
	}

	// check whether the last migration executed was the one
	// right before our special workflow
	lastRecord := records[len(records)-1]
	if lastRecord.Id == "20190502111700_last_migration_before_special_workflow.sql" {
		// ONLY EXECUTE 1 MIGRATION FILE
		// in my case, this migration adds a new column
		n, err = migrate.ExecMax(db, "postgres", migrations, migrate.Up, 1)
		if err != nil {
			return n, err
		}

		// perform the intermediate data processing
		// in my case, this populates the new column with data
		// converted from an existing column
		err = convertRoots()
	}
	return n, err
}
// using Gorm, so we need a struct to represent this "intermediate" schema

// IntermediateProof ...
type IntermediateProof struct {
	ID              string `gorm:"primary_key"`
	MerkleRoot      string
	MerkleRootBytes []byte
}

// TableName ...
func (IntermediateProof) TableName() string {
	return "proofs"
}
func convertRoots() error {
	var ips []IntermediateProof
	if err := database.Manager.Order("created_at").Find(&ips).Error; err != nil {
		return err
	}
	for _, ip := range ips {
		// My use case involved converting a varchar column holding a
		// base58-encoded representation of a multihash into a bytea
		// column of that decoded data. As far as I can tell that is not
		// easy to do in raw SQL, which is why I wanted this intermediate
		// data processing step.
		mh, err := multihash.FromB58String(ip.MerkleRoot)
		if err != nil {
			return err
		}
		if err := database.Manager.Model(&ip).UpdateColumn("merkle_root_bytes", mh).Error; err != nil {
			return err
		}
	}
	fmt.Println("performed", len(ips), "merkle root conversions")
	return nil
}
Meanwhile, here's a workaround; thanks to @nacx for the help.
Somewhere, you define your embedded reference:
//go:embed migrations/sqlite/*
var migrations embed.FS
const migrationPath = "migrations/sqlite"
Somewhere else, you use them via http.FS. The trick is using fs.Sub, since the latest sql-migrate expects the files in the root directory.
relative, err := fs.Sub(migrations, migrationPath)
if err != nil {
	return err
}

m := sqlmigrate.HttpFileSystemMigrationSource{FileSystem: http.FS(relative)}
Hi, we've been using different libraries to migrate our schema, and recently we ran into issues while trying to apply some complex migration logic. In Ruby on Rails we wrote such logic in the native language (Ruby), which made the migration process more robust. Is there a way, or a plan, to support such a feature (similar to Goose, Boostport/migration, etc.)?
Please advise. Thanks!
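For anyone unfamiliar with the pattern being requested: the idea is an ordered migration list where some steps are SQL and some are plain Go functions. The sketch below is purely illustrative; none of it is sql-migrate (or Goose) API, and the `step`/`state` types are made up for this example. A real runner would execute the SQL against the database and record applied IDs in a migrations table.

```go
package main

import "fmt"

// step is one migration: either a SQL string or a Go function.
type step struct {
	id  string
	sql string               // applied when non-empty
	fn  func(s *state) error // applied when non-nil
}

// state stands in for the migrations bookkeeping table.
type state struct {
	applied []string
}

// run applies every not-yet-applied step, in order.
func run(st *state, steps []step) error {
	done := map[string]bool{}
	for _, id := range st.applied {
		done[id] = true
	}
	for _, s := range steps {
		if done[s.id] {
			continue // already applied, skip
		}
		if s.fn != nil {
			if err := s.fn(st); err != nil {
				return err
			}
		}
		// (a real runner would execute s.sql against the DB here)
		st.applied = append(st.applied, s.id)
	}
	return nil
}

func main() {
	st := &state{}
	steps := []step{
		{id: "0001_add_column", sql: "ALTER TABLE proofs ADD merkle_root_bytes bytea"},
		{id: "0002_backfill", fn: func(s *state) error {
			fmt.Println("running Go backfill between SQL migrations")
			return nil
		}},
		{id: "0003_drop_old", sql: "ALTER TABLE proofs DROP merkle_root"},
	}
	if err := run(st, steps); err != nil {
		panic(err)
	}
	fmt.Println(st.applied)
}
```

This is essentially what the merkleRootMigration workaround earlier in the thread simulates by hand with ExecMax.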