stellar-deprecated / bridge-server

Deprecated. Go to https://github.com/stellar/go/tree/master/services/bridge
Apache License 2.0

Bindata for migrations should be generated as part of build #31

Open · Mr0grog opened 8 years ago

Mr0grog commented 8 years ago

This is one of two issues that had me pretty much stumped for more than a day. In the 0.0.12 release of bridge server (I haven't checked the compliance server for this issue yet), the bindata.go file that holds the MySQL migrations is not up to date with the actual migration .sql files. So if you call:

./bridge --migrate-db

…as recommended in the readme, you get a DB that doesn't actually work. (Specifically, the SentTransaction table is missing the transaction_id column.)

It seems pretty obvious that this was a result of the .sql files getting updated but someone forgetting to manually run go-bindata -o bindata.go migrations_* from each db/drivers/* directory. I imagine it would be a very good idea to simply script that as part of the build process somehow (I'm not familiar enough with gb to know whether there's a way to do that without wrapping it).
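For example (a minimal sketch: the doc.go file name and the package name are assumptions about the layout, and the directive just reuses the command above), each driver package could carry a go:generate directive so the regeneration step at least lives next to the code it produces:

    // db/drivers/mysql/doc.go (hypothetical file and package name)
    //
    // `go generate` (or `gb generate`) runs the directive below from this
    // directory. The sh -c wrapper is needed because go generate executes
    // commands directly, without shell glob expansion.
    //go:generate sh -c "go-bindata -o bindata.go migrations_*"
    package mysql

That doesn't make generation part of the build by itself, but it does give CI a single, documented command to run and check.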

Further, maybe it would be a good idea to do a 0.0.13 hotfix-style release that fixes the SentTransaction table? At the very least, the fix should go out in whatever the next normal release is.

(It is worth mentioning: the bindata.go for mysql appears to be up to date in master.)

nullstyle commented 8 years ago

@Mr0grog IMO, this is probably just an issue with your inexperience with Go and gb. Some background reading: https://blog.golang.org/generate

In this specific case, you simply need to run gb generate whenever you change one of the schema packages. It'll spit out some warnings and will fail, but it will still regenerate the bindata. My concern about simply autogenerating the code at build time is that it can cause a ton of diff churn if your generator isn't deterministic.

IMO, our first step should be to document the answer to "I need to change the schema, how do I do it?" and then we can further consider triggering generate automatically. Personally I shy away from having the code generation be automatic, but I am willing to concede in favor of group opinion.

Mr0grog commented 8 years ago

Ah. I had run gb generate, but saw that it failed and so assumed it hadn’t regenerated the bindata. I’m not sure when I ran it, so it’s entirely possible that it did at least do that part, but only after I had already realized mine was out of date.

I do worry, though, about future releases having the same problem (someone forgetting to run gb generate before tagging for release, basically). Maybe a different solution is to have tests for every migration and make sure bridge --migrate-db results in the correct schema?

Or some other, simpler way I’m not thinking of to verify that the Go source version of the migrations matches the SQL files. ¯\_(ツ)_/¯
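Something like the following might do it (a rough sketch: it assumes go-bindata's default generated API, the Asset and AssetNames functions, and that the test sits in the same package as bindata.go so asset names resolve as relative paths):

    package mysql

    import (
        "bytes"
        "io/ioutil"
        "testing"
    )

    // TestBindataMatchesSQLFiles fails when bindata.go is stale relative
    // to the migration files on disk, i.e. when someone edited a .sql
    // file but forgot to re-run go-bindata.
    func TestBindataMatchesSQLFiles(t *testing.T) {
        for _, name := range AssetNames() {
            embedded, err := Asset(name)
            if err != nil {
                t.Fatalf("reading embedded asset %s: %v", name, err)
            }
            onDisk, err := ioutil.ReadFile(name)
            if err != nil {
                t.Fatalf("reading %s from disk: %v", name, err)
            }
            if !bytes.Equal(embedded, onDisk) {
                t.Errorf("bindata.go is stale for %s; re-run go-bindata", name)
            }
        }
    }

One gap: a brand-new .sql file that was never embedded won't appear in AssetNames(), so catching additions would also need a comparison against a directory listing.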

nullstyle commented 8 years ago

Ostensibly any schema change would also be accompanied by a set of tests that exercise the schema in such a way that a missing migration would result in a test failure. But, seeing as that is not presently the case, we should definitely have another solution as well :).

I'll look into integrating some automated tests of the migration system since I'm working on a common schema management package for our projects anyway.
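For what it's worth, such a test could stay quite small (a hypothetical sketch: the DSN and the runMigrations entry point stand in for whatever the real harness provides):

    package schema_test

    import (
        "database/sql"
        "testing"

        _ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
    )

    // TestMigrateExercisesSchema runs the migrations against a throwaway
    // database, then selects the column a stale bindata.go is known to
    // drop.
    func TestMigrateExercisesSchema(t *testing.T) {
        db, err := sql.Open("mysql", "root@/bridge_test")
        if err != nil {
            t.Fatal(err)
        }
        defer db.Close()

        if err := runMigrations(db); err != nil { // hypothetical entry point
            t.Fatal(err)
        }

        // An empty table yields sql.ErrNoRows, which is fine; a missing
        // column yields a MySQL "unknown column" error and fails the test.
        var id sql.NullString
        err = db.QueryRow(`SELECT transaction_id FROM SentTransaction LIMIT 1`).Scan(&id)
        if err != nil && err != sql.ErrNoRows {
            t.Errorf("exercising SentTransaction.transaction_id: %v", err)
        }
    }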

Mr0grog commented 8 years ago

When I filed this I hadn’t realized this was also a problem for the compliance server, so I want to note that the migration that shipped in v0.0.12 for compliance also doesn’t match the first migration in source control at that time. Specifically:

This means the next releases of both bridge and compliance probably need to handle a database where the first migration was “incorrect” (since, from the perspective of the migration system, migration 1 was applied, but it didn’t actually do everything we now expect migration 1 to do).
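One way to handle that (purely a sketch: the column definition is a guess, not the real schema, and a repair like this would have to run before the normal migration pass) is a guarded step that checks whether the recorded migration actually took effect:

    package schema

    import "database/sql"

    // repairSentTransaction adds the transaction_id column if migration 1
    // was recorded as applied but ran from the stale bindata and skipped
    // it. The VARCHAR length here is a guess.
    func repairSentTransaction(db *sql.DB) error {
        var n int
        err := db.QueryRow(
            `SELECT COUNT(*) FROM information_schema.COLUMNS
             WHERE TABLE_SCHEMA = DATABASE()
               AND TABLE_NAME = 'SentTransaction'
               AND COLUMN_NAME = 'transaction_id'`).Scan(&n)
        if err != nil {
            return err
        }
        if n == 0 {
            _, err = db.Exec(`ALTER TABLE SentTransaction ADD COLUMN transaction_id VARCHAR(64)`)
        }
        return err
    }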

For context: I originally hadn’t explored and understood how compliance works if the ask_user callback isn’t implemented. The docs say it simply doesn’t pass info back in that situation, but now that I’ve looked into it, I realize that’s completely incorrect: a whole set of DB tables and an internal HTTP endpoint exist to handle that case (a separate issue I’ll file is that this needs to be documented).