Closed saulius closed 11 years ago
Pivotal story?
Sorry, forgot it; it's for this one: https://www.pivotaltracker.com/story/show/38026757.
It feels like there are a few changes here; schema changes related to the new feature, code to take advantage of the new data, and data changes which might need further additions.
I'll have a chat with other people here about how they're doing bulk data upload like that; whitehall and EFG have a separation between schema migrations and data migrations. It feels like we might want something like that to bulk slurp up some content. I'm ambivalent about large data files being in git, but we probably want to decide how we'd like that to work.
This is more than just a data migration; we're going back in time to add new seed data. There aren't that many options: either rebuild the entire thing from scratch, or replace the db with an updated snapshot which has been rebuilt for these tables.
I added a story for data migrations recently https://www.pivotaltracker.com/story/show/43132661. I want to separate data migrations from schema changes. I have some prototype quality code already that does this.
I uploaded a snapshot for this (tariff_development-2013-01-30_national_quantities.sql.bz2) in case we don't find another solution. I made it locally on a development database, so a find-and-replace is needed.
It looks like we'll have to do this as a new DB snapshot.
Can you sketch out how you think the deployment will proceed?
I'm wondering in particular if we need to have code that will run with both the existing schema and new schema to begin with, then do the database update, then remove the code to work with the old schema.
@jabley I think this deployment does not have to be that complicated. National quantity assignment is not part of the transformation process, so we just need records in TBL9 and COMM. So I think I could just dump those two tables (~45MB). Then we would just need to migrate the schema, import this two-table dump and check how things are looking. In case of a problem, just roll back.
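The two-table dump and load could be sketched as a pair of shell functions like the ones below. This is only a sketch: the target database name is whatever the environment uses, and the load step must run only after the schema migration has created the two tables.

```shell
# Sketch: dump only the two CHIEF tables (data only, no schema),
# and load them into a target database after migrating the schema.
dump_national_quantities() {
  mysqldump -u root --no-create-db --no-create-info \
    tariff_development chief_comm chief_tbl9 \
    | bzip2 > tariff_development-national_quantities.sql.bz2
}

load_national_quantities() {
  # $1 = target database name (e.g. the preview/production schema);
  # run this only after the schema migration has created the tables.
  bunzip2 -c tariff_development-national_quantities.sql.bz2 \
    | mysql -u root "$1"
}
```

If anything looks wrong after the load, rolling back is just dropping the rows in those two tables and reverting the schema migration.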
Regarding the deploys where we actually need to import a new snapshot, I tend to lean towards the Blue-Green approach you mentioned. What's unfortunate is that MySQL still does not support database renaming in 2013. Here's what I found to be closest to renaming, but I haven't tried running it yet: http://blog.shlomoid.com/2010/02/emulating-missing-rename-database.html. So in this case I would imagine the process to go as follows:
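The emulation described in that post boils down to moving every table from one schema to another with `RENAME TABLE`, which MySQL does support across databases. A rough sketch, with database names purely illustrative:

```shell
# Sketch: emulate RENAME DATABASE by moving each table between
# schemas with RENAME TABLE (an atomic, metadata-only operation).
move_tables() {
  local from="$1" to="$2"
  mysql -u root -N -e "SHOW TABLES FROM \`$from\`" | while read -r t; do
    mysql -u root -e "RENAME TABLE \`$from\`.\`$t\` TO \`$to\`.\`$t\`"
  done
}

# Hypothetical Blue-Green swap of a freshly built snapshot:
#   mysql -u root -e 'CREATE DATABASE tariff_old'
#   move_tables tariff_production tariff_old   # park the live schema
#   move_tables tariff_new tariff_production   # promote the new one
```

Each `RENAME TABLE` is atomic, but the loop as a whole is not, so the app would still need a brief maintenance window while tables move.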
can we rebase this on master?
Sure, just did that.
@sauliusg do we have a dump of just the missing tables to load into preview?
I uploaded a snapshot called tariff_development-national_quantities.sql.bz2. Contains just the inserts to those two tables. Exported with:
mysqldump -u root --no-create-db --no-create-info tariff_development chief_comm chief_tbl9 > tariff_development-national_quantities.sql
Btw, there is related PR for the frontend https://github.com/alphagov/trade-tariff-frontend/pull/52.
@jabley we just need to load the national quantities dump onto production for deployment now. Testing on preview
This adds support for national quantities. Adds two new CHIEF tables (chief_comm and chief_tbl9). COMM is a join table between goods_nomenclatures and TBL9. TBL9 contains descriptions for national quantity units.
National quantities apply to national excise measures. Commodities that have national quantities (for checking after deploy):
There may be fewer or more of these; I'm not checking validity dates in this query, but it is a good reference.
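The original query isn't shown here, but the post-deploy check could be something along these lines. The column name is an assumption for illustration, and, as noted above, this ignores validity dates:

```shell
# Hypothetical post-deploy check: count distinct commodities that
# gained a national quantity via the new chief_comm join table.
# ($1 = database name; cmdty_code is an assumed column name.)
check_national_quantities() {
  mysql -u root -N -e \
    'SELECT COUNT(DISTINCT cmdty_code) FROM chief_comm' "$1"
}
```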
There are two problems however:
The format of national quantity unit description is horrible, e.g.:
Uppercase, weird spacing. I don't think we should be altering CHIEF data, but we could add a 'normalizer' in the frontend.
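The frontend normalizer would presumably live in Ruby, but the transformation itself is simple enough to sketch as a shell function: squeeze the repeated spaces and title-case the all-caps description. The function name and sample input are illustrative only.

```shell
# Hypothetical normalizer sketch: collapse repeated spaces and
# title-case each word of an all-caps CHIEF unit description.
normalize_unit_description() {
  echo "$1" \
    | tr -s ' ' \
    | awk '{ for (i = 1; i <= NF; i++)
               $i = toupper(substr($i, 1, 1)) tolower(substr($i, 2));
             print }'
}
```

For example, `normalize_unit_description 'LITRE   OF  ALCOHOL'` would print `Litre Of Alcohol`, leaving the stored CHIEF data untouched.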