AlCutter opened this issue 1 week ago
For the mysql prototype I used github.com/golang-migrate/migrate and it worked well. Though to be fair, I didn't exactly stress test it.
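For concreteness, the prototype usage was roughly this shape (the DSN and migrations path below are placeholders, not the actual prototype code):

```go
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/mysql" // MySQL database driver
	_ "github.com/golang-migrate/migrate/v4/source/file"    // read migrations from disk
)

func main() {
	// Migrations live as versioned .sql files in ./migrations; the DSN is illustrative.
	m, err := migrate.New(
		"file://./migrations",
		"mysql://user:pass@tcp(localhost:3306)/tessera")
	if err != nil {
		log.Fatalf("migrate.New: %v", err)
	}
	// Up applies all pending migrations; ErrNoChange just means we're already current.
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatalf("applying migrations: %v", err)
	}
}
```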
The main decision points around any schema update tool are:

- When are migrations written: at the point where the change is made, or at release time? Without thinking about it too much, a hybrid is probably best: any PR that changes a schema should add an update script, and at release time we roll all of the intermediate update scripts since the last release into a single update (see the sketch after this list).
- Testing: we'd want a decently sized log that we test each migration script against.
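With a tool like golang-migrate, the "update script per PR" flow would leave a trail of versioned up/down pairs, and the release-time roll-up would squash the intermediate pairs into one. File names below are illustrative:

```
storage/mysql/migrations/
  000001_initial_schema.up.sql
  000001_initial_schema.down.sql
  000002_add_checkpoint_index.up.sql
  000002_add_checkpoint_index.down.sql
```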
At the moment Terraform creates the schemas for GCP. This is a bit of an anti-pattern anyway, and it'll only get worse as we add more complexity here.
Just to add a data point: The AWS implementation, as it is now, applies the schemas with code, not with Terraform.
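For anyone unfamiliar with that approach, a minimal sketch of applying a schema from code at startup (names are illustrative, not the actual AWS implementation):

```go
package storage

import (
	"context"
	"database/sql"
	_ "embed"
)

// schema.sql holds the CREATE TABLE statements. Since every instance runs this
// at startup, the statements need to be idempotent (CREATE TABLE IF NOT EXISTS ...),
// and for MySQL the DSN needs multiStatements=true if the file contains more than
// one statement.
//
//go:embed schema.sql
var schemaSQL string

// initSchema applies the embedded schema when the storage layer is constructed.
func initSchema(ctx context.Context, db *sql.DB) error {
	_, err := db.ExecContext(ctx, schemaSQL)
	return err
}
```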
Good to know. That potentially makes migrations even trickier to reason about: without some care, multiple instances could try to perform schema changes at the same time.
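One common mitigation is to take an advisory lock around schema changes so only one instance applies them at a time. A rough sketch against MySQL (lock name and timeout are illustrative; I believe golang-migrate's MySQL driver takes a similar lock internally):

```go
package storage

import (
	"context"
	"database/sql"
	"fmt"
)

// withMigrationLock runs fn while holding a MySQL advisory lock, so concurrent
// instances don't attempt schema changes simultaneously.
func withMigrationLock(ctx context.Context, db *sql.DB, fn func() error) error {
	// Advisory locks are per-connection, so pin one connection for the duration.
	conn, err := db.Conn(ctx)
	if err != nil {
		return err
	}
	defer conn.Close()

	// GET_LOCK returns 1 on success, 0 on timeout (30s here).
	var got int
	if err := conn.QueryRowContext(ctx, "SELECT GET_LOCK('tessera_schema_migration', 30)").Scan(&got); err != nil {
		return err
	}
	if got != 1 {
		return fmt.Errorf("could not acquire schema migration lock")
	}
	defer conn.ExecContext(ctx, "DO RELEASE_LOCK('tessera_schema_migration')")

	return fn()
}
```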
For alpha I'm happy with whatever works (so no changes needed to any of the implementations), but for beta we should be targeting a setup where log data can be migrated across versions.
For beta: I'm wondering whether we should use a unified migration tool/approach across all the storage implementations, or pick the best tool/approach for each storage implementation.
https://atlasgo.io/blog/2022/12/01/picking-database-migration-tool
It may be necessary to change the structure or layout of state beyond what's prescribed by the tlog-tiles API, e.g. coordination DB schemas may need to be altered for performance or functionality reasons.
We should provide some mechanism to "upgrade" storage so that instances of Tessera-based logs do not become unusable when this happens.
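One possible shape for such a mechanism: record a schema version in the coordination DB and have binaries refuse to start against an unexpected version, leaving the actual upgrade to a dedicated migration step. A purely illustrative sketch (table/column names are made up):

```go
package storage

import (
	"context"
	"database/sql"
	"fmt"
)

// requiredSchemaVersion is the schema version this binary was built against.
// The table and column below are hypothetical, just to show the shape of the check.
const requiredSchemaVersion = 3

// checkSchemaVersion fails fast if the stored schema doesn't match what this
// binary expects, rather than running against a layout it doesn't understand.
func checkSchemaVersion(ctx context.Context, db *sql.DB) error {
	var v int
	if err := db.QueryRowContext(ctx, "SELECT version FROM SchemaMetadata").Scan(&v); err != nil {
		return fmt.Errorf("reading schema version: %v", err)
	}
	if v != requiredSchemaVersion {
		return fmt.Errorf("schema is at version %d but this binary requires %d: run the upgrade step first", v, requiredSchemaVersion)
	}
	return nil
}
```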