Mr0grog opened this issue 8 years ago
@Mr0grog IMO, this is probably just an issue with your inexperience with Go and gb. Some reading: https://blog.golang.org/generate

In this specific case, you simply need to run `gb generate` whenever you change one of the schema packages. It'll spit out some warnings and will fail, but it will still regenerate the bindata. My concern about simply autogenerating the code at build time is that it can cause a ton of diff churn if your generator isn't deterministic.

IMO, our first step should be to document the answer to "I need to change the schema, how do I do it?" and then we can further consider triggering generate automatically. Personally I shy away from having code generation be automatic, but I am willing to concede in favor of group opinion.
Ah. I had run `gb generate`, but saw that it failed and so assumed it hadn't generated the bindata. I'm not sure exactly when I ran it, so it's entirely possible that it did regenerate that part, but only after I had already realized mine was out of date.
I do worry, though, about future releases having the same problem (someone forgetting to run `gb generate` before tagging a release, basically). Maybe a different solution is to have tests for every migration and make sure `bridge --migrate-db` results in the correct schema?
Or some other, simpler way I'm not thinking of to verify that the Go source version of the migrations matches the SQL files. ¯\_(ツ)_/¯
Ostensibly, any schema change would also be accompanied by a set of tests that exercise the schema in such a way that a missing migration would result in a test failure. But seeing as that is not presently the case, we should definitely have another solution as well :).
I'll look into integrating some automated tests of the migration system since I'm working on a common schema management package for our projects anyway.
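One shape such automated tests could take is applying every migration in order and asserting the end state. A rough sketch, with a fake in-memory schema standing in for a real throwaway database, and every name here hypothetical rather than from the actual bridge codebase:

```go
package main

import "fmt"

// schema is a stand-in for a real database: "Table.column" -> exists.
type schema map[string]bool

// migrations mimics the ordered migration list; a real test would execute the
// SQL from migrations_*.sql against a scratch database instead.
var migrations = []func(s schema){
	func(s schema) { s["SentTransaction.id"] = true },
	func(s schema) { s["SentTransaction.transaction_id"] = true },
}

func main() {
	s := schema{}
	for _, m := range migrations {
		m(s)
	}
	// After all migrations, the column this thread found missing must exist.
	if !s["SentTransaction.transaction_id"] {
		fmt.Println("FAIL: SentTransaction.transaction_id missing")
		return
	}
	fmt.Println("schema OK")
}
```

The point of the pattern is that a migration silently dropped from the embedded set makes the final assertion fail, which is exactly the class of bug this issue describes.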
When I filed this I hadn't realized this was also a problem for the compliance server, so I want to note that the migration that shipped in v0.0.12 for compliance also doesn't match the first migration in source control at that time. Specifically:

- `AllowedFI` table
- `AllowedUser` table
- `AuthorizedTransaction.memo` column is allowed to have nulls

This means the next releases of both bridge and compliance probably need to handle a database where the first migration was "incorrect" (since from the perspective of the migration system, migration 1 was applied, but it didn't actually do the things we expect migration 1 to do).
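One way a next release could cope with such databases is to stop trusting the migration counter for version 1 and instead probe the schema directly, applying targeted fixes for the known gaps. A rough sketch only: the `hasColumn` probe and the ALTER statement are hypothetical (a real implementation would query `information_schema` and use the DDL from the `.sql` files), and only the `SentTransaction` gap from this thread is shown.

```go
package main

import "fmt"

// repairStatements returns DDL to patch databases whose first migration was
// applied from the stale bindata. hasColumn would query information_schema in
// a real implementation; here it is injected so the sketch is testable.
func repairStatements(hasColumn func(table, column string) bool) []string {
	var stmts []string
	if !hasColumn("SentTransaction", "transaction_id") {
		// Illustrative DDL; the real column type lives in the .sql migrations.
		stmts = append(stmts, "ALTER TABLE SentTransaction ADD COLUMN transaction_id VARCHAR(64)")
	}
	return stmts
}

func main() {
	// Simulate a v0.0.12 database that is missing the column.
	broken := func(table, column string) bool { return false }
	for _, s := range repairStatements(broken) {
		fmt.Println(s)
	}
}
```

Because each fix is guarded by a schema probe, the same repair step is a no-op on databases that were created from a correct `bindata.go`.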
For context: I originally hadn't explored and understood how compliance works if the `ask_user` callback isn't implemented. The docs say it simply doesn't pass info back in that situation, but now that I've looked into it, I realize that's completely incorrect: a whole set of DB tables and an internal HTTP endpoint exist to handle that case (a separate issue I'll file is that this needs to be documented).
This is one of two issues that had me pretty much stumped for more than a day. In the 0.0.12 release of bridge server (I haven't checked compliance server for this issue yet), the `bindata.go` file that holds the migrations for mysql is not up-to-date with the actual `migration.sql` files. So if you call: … as recommended in the readme, you get a DB that doesn't actually work. (Specifically, the `SentTransaction` table is missing the `transaction_id` column.)

It seems pretty obvious that this was a result of the `.sql` files getting updated, but someone forgetting to manually run `go-bindata -o bindata.go migrations_*` from each `db/drivers/*` directory. I imagine it would be a very good idea to simply script that as part of the build process somehow (I'm not familiar enough with `gb` to know if there's a way to do that without wrapping it).

Further, maybe it would be a good idea to do a `0.0.13` hotfix-ish release that fixes the `SentTransaction` table? It should at least be done in whatever the next normal release is.

(It is worth mentioning: the `bindata.go` for mysql appears to be up-to-date in master.)
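One low-effort way to script the regeneration without wrapping gb would be a `go:generate` directive in each driver package, so `gb generate` (or `go generate ./...`) rebuilds the bindata in one command. This is a config fragment sketch: the file name and the exact go-bindata arguments are assumptions about the layout, and note that `go generate` runs commands without a shell, so glob patterns like `migrations_*` are not expanded and the inputs must be spelled out explicitly.

```go
// Hypothetical header for db/drivers/mysql/driver.go. The directive below is
// picked up by `gb generate` (or `go generate ./...`); `go generate` does not
// expand shell globs, so list the input paths explicitly rather than using
// the migrations_* pattern from the manual command.

//go:generate go-bindata -o bindata.go migrations_mysql
package mysql
```

That still leaves a human in the loop, which is why pairing it with a CI check that the committed `bindata.go` matches the `.sql` files is probably the more robust half of the fix.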