We have a CD server that Jenkins is configured to deploy to every time the `master` branch is updated. Currently, our deployment process does not include migrations, but it needs to.
This is complicated by the current approach we're taking: we compile the application on the Jenkins server, copy the built artifact to the testing server, and then deploy it. We do not expose PostgreSQL on the testing server to the internet, only to localhost. We use the Liquibase Gradle plugin to execute migrations, so the migrations need to be run from a checkout of the repo.
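For context, the migrations are driven by a Liquibase activity in the Gradle build. A minimal sketch of such a configuration, assuming the standard liquibase-gradle-plugin DSL (the changelog path, JDBC URL, and credentials below are placeholders, not our actual values):

```groovy
// build.gradle — illustrative liquibase-gradle-plugin activity.
// Changelog path, database name, and credentials are placeholders.
liquibase {
    activities {
        main {
            changeLogFile 'src/main/db/changelog.xml'
            url 'jdbc:postgresql://localhost:5432/app'
            username 'app_user'
            password 'app_password'
        }
    }
}
```

Migrations are then applied with `./gradlew update`, which is why running them requires both a checkout of the repo and a database reachable from wherever that task runs.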
The testing server is currently up to date on migrations, but of course it will not remain so. Since it is up to date, we do not have to worry about duplicate table definitions from a previously run `seed.sql` script.
I see a few options:

- open an SSH tunnel from Jenkins to the testing server that forwards the PostgreSQL port, and run the migrations from Jenkins
- somehow update a checkout on the testing server (push or pull), and run the migrations from there
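The first option could be sketched roughly as follows; the hostname, ports, and the `dbUrl` property are illustrative assumptions (they presume the build is set up to take the JDBC URL as a Gradle property), not our actual configuration:

```shell
# Forward a local port on the Jenkins box to PostgreSQL on the testing
# server, which only listens on localhost there.
ssh -f -N -o ExitOnForwardFailure=yes \
    -L 15432:localhost:5432 deploy@testing-server

# Run the migrations from the Jenkins checkout against the tunnel.
./gradlew update -PdbUrl=jdbc:postgresql://localhost:15432/app

# Tear down the background tunnel when done.
pkill -f '15432:localhost:5432'
```

This keeps PostgreSQL unexposed while letting the existing Jenkins checkout drive the migrations; the cost is managing the tunnel's lifecycle and database credentials on the Jenkins side.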
Follow-up from #254 Allow database schema changes without losing data