First release in https://github.com/qwat/qwat-data-model/tree/master/update.
Use: `python test_migration.py --pg_service qwat`
The command returns the validity of the current model tested (OK or NOT OK). If NOT OK, a diff is launched (the diff executable can be given as a parameter).
NOTE: the test_migration.expected.sql file has to be fixed for each qWat version, as it is used as a reference. To update test_migration.expected.sql, just launch test_migration.sql on the database and redirect the output to the file.
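For reference, that regeneration step can be a simple output redirection, e.g. `psql service=qwat -f test_migration.sql > test_migration.expected.sql` (the pg_service name `qwat` is the one assumed above).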
@sylvainbeo glad to see this developed!!! Before closing the issue, it would be good to check that:
old_model + deltas = new_model
For each commit, one should provide the corresponding SQL delta and an updated expected dump.
Will the expected dump be automatically created?
If we agree that everything should go through a PR, I think we should have a unit test that achieves this. Hence, it's probably not necessary to keep the expected dump in the git repo. I think it's important to generate it live; otherwise each commit will produce a new version of the dump, and the physical size of the git repo will increase quite quickly.
Another side note regarding modifications to the views: if one commits a modification to a view, the delta is basically just running the rewrite_views.sh script; it's not necessary to provide an SQL delta. I think this needs to be handled properly.
Working on it. I'm trying to cover all cases before committing.
The migration process is now automatic. There is no more need for the pre-generated test_migration.expected.sql.
Migration is tested on two test DBs. If it's OK, the process can be run on the main DB.
See the upgrade_db.README and the documentation.
Good job! Do you intend to run the test on Travis? I think that for every commit the test should ensure that current db = previous version + all the deltas. Hence, if anyone pushes a change directly to a file, we immediately get an email saying it's broken. Adding the delta in a later commit would then fix it (hopefully ;) ). Thoughts?
Can be done indeed.
The test should ensure that current db = previous version + (all the deltas >= current db_version). The update/delta directory may contain all deltas, including old ones. The migration process takes into account the version of the current DB and the version numbers contained in the delta files' names.
I think we should test deltas only within the current major version. As far as I see it, there are 2 solutions: keep a dump of each released version in the repo, or check out the previous release from git and replay the deltas on it.
The second sounds better to me, but I am no expert on this topic.
What do you mean by "we should test deltas only in the current major version"? The purpose of the deltas is to track the evolution of the model and to be able to replay that evolution on an arbitrary state of the DB (according to its version number). Do we agree on that?
Doing a checkout on the git repo is OK for an automatic test, but maybe not for a test the user runs on his own. We are in that case, and I'm not very comfortable with it. But we can talk about it.
Indeed, we agree on the purpose of the deltas. I just meant that we don't need to test deltas on 1.x.x if we are on 2.x.x, while we can still keep them in the repo. I would say that going from 1.x.x to 2.x.x might not be feasible with deltas; this can be seen as a hard upgrade, as in PostGIS.
git checkout might not be the way to go, sorry; the idea would be to download a release tarball, create the database, and then run the deltas.
Yes, you're right. Major migrations won't be possible with deltas alone. Before that, to improve this first version of the automatic migration process, we should do some real tests. That way we can see what's OK and what's not, in particular the SQL queries I run on the DB to detect changes: I made tests to check that they are the right ones, but some may be missing. It would be cool to have more feedback.
It will make things much easier if we have the testing on Travis. The idea of the workflow is:
- commit your model changes in a branch,
- provide the corresponding delta,
- Travis rebuilds the previous release, applies the deltas, and compares the result with the current model.

Like this you can commit whenever you want; it should fail on Travis until you get the proper deltas. Whenever it's ready, you can push your changes to master.
Oh, we are talking about 2 different things. You're right for the developer side; the workflow you described is good. I'm going to add it to the documentation.
Before that, I was talking about the user side (the user who installed QWAT 6 months ago, wants to upgrade, and doesn't know anything about git).
I don't think we have to care about them ;) More seriously, going with dump/restore might do the job.
Working on the process. We will have to tag a first version of the model; otherwise, the process cannot work.
Tested successfully in a separate, simplified environment: https://github.com/sylvainbeo/test
OK. The process has been added to the git repo. You can see a real test working by checking the sbe_update branch (https://travis-ci.org/qwat/qwat-data-model). Feedback is welcome.
Model tags must be named x.x.x, and major versions x.0.0 (e.g. 1.0.1, 2.0.0). This is important for the process to work.
I added a tag on the current data model (1.0.0).
Thanks a lot for your work!!! I am very glad to see this implemented!
I miss a bit of documentation on how I should proceed with the pull request (should I already tag the new version?). Is the version hardcoded somewhere? I couldn't find it.
I think the general development process is clear (to me): create a branch with all your changes (can be several commits), create the delta, and open a pull request with everything.
Now I just miss the part on how I give a version to my work.
I just submitted a PR to test this live https://github.com/qwat/qwat-data-model/pull/121
One thing: in the checking process, you might want to run ./ordinary-data/views/rewrite_views.sh. This will rewrite all the views (hence avoiding having to rewrite everything in the delta).
Please let me know and thanks again!
Indeed. The version is not hardcoded in the DB; it is only a tag on the git repository. Maybe we could add the version somewhere.
Mmm, in fact I just saw that there is already a version table in the qwat_sys schema.
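For anyone checking along, something like this should show what that table holds (assuming read access to the qwat_sys schema):

```sql
-- Inspect the version table that already exists in the qwat_sys schema
SELECT * FROM qwat_sys.version;
```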
Right, Vincent did this at some point. It would be good to keep it up to date.
The delta script should take care of doing so, and the test should check at the end that the versions correspond. Also, it might be good to check that we do not skip a delta, no? I.e. there should be a start version in the delta file name: delta-1.0.0-to-1.0.1.sql. What do you think?
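For illustration, a minimal sketch of what such a delta file could contain; the model change shown and the column name in qwat_sys.version are assumptions, not actual qWat code:

```sql
-- delta-1.0.0-to-1.0.1.sql (hypothetical sketch)

-- 1. The actual model changes for this version (hypothetical example):
ALTER TABLE qwat_od.pipe ADD COLUMN remark text;

-- 2. Bump the stored system version so the conformity test can verify it
--    (the column name "version" is an assumption):
UPDATE qwat_sys.version SET version = '1.0.1';
```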
I added some points and listed them as a TODO list in the header post of the PR. @sylvainbeo waiting for your feedback on this.
@sylvainbeo I fixed some issues:
- The test now uses $TRAVIS_COMMIT and $TRAVIS_PULL_REQUEST_SHA, hence we work on the latest state of the PR.
- Deltas are correctly executed in order; I just tested that in a bash script to be sure.
The delta now implements the system version update (hardcoded), and the conformity test checks that the version matches.
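A sketch of what that version check could boil down to (same assumed column name as in the delta sketch above):

```sql
-- Should return true once delta-1.0.0-to-1.0.1.sql has been applied
SELECT version = '1.0.1' AS version_matches FROM qwat_sys.version;
```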
Perfect. Are you sure trigger functions are properly tested? I think we should get some errors due to the leftover trigger functions of vw_element_valve if they are not deleted in the delta. What do you think?
You're right. We should manually delete the vw_element_valve triggers in the delta file.
So... the migration test should be failing at the moment, while it is not. Hence, I suppose there is an issue with the test.
Yes. I don't know yet where the problem comes from.
OK. In fact there are no errors. As we mentioned earlier, the FT functions are not listed in the verification (because they are automatically generated, hence no possible errors on that part). That's why the remaining ft_element_valve_insert, delete and update functions don't break the integration process.
We just have to add the deletion of those 3 in the delta, as sketched below.
Do we agree?
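A sketch of the cleanup to add to the delta; only the three function names come from the discussion above, while the qwat_od schema and the argument-less signatures are assumptions:

```sql
-- Drop the leftover trigger functions of the old vw_element_valve view;
-- CASCADE also removes any triggers still referencing them.
DROP FUNCTION IF EXISTS qwat_od.ft_element_valve_insert() CASCADE;
DROP FUNCTION IF EXISTS qwat_od.ft_element_valve_update() CASCADE;
DROP FUNCTION IF EXISTS qwat_od.ft_element_valve_delete() CASCADE;
```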
All right. We probably disabled checking these because adding fields also makes the view and trigger checks fail due to field ordering. Not 100% perfect, but I think it's OK!
Agreed. I think we can close this issue, merge the PR, and maybe open another issue to keep track of the remaining work?
Fine with me, the remaining bits are related to demo data anyway.
good job!
We should build a test environment which verifies that applying the model deltas yields a model equivalent to the current version.
This depends on #97
Still missing