Closed by rlskoeser 8 months ago
general process:
for production, once we know it runs in staging on jammy:
- note: there will be some minimal downtime for the database upgrade (since we're upgrading that as well)
- this is a little different for library apps, since they use capistrano for the app deploy where we use ansible
need to handle shared files between instances:
- update the nfs server config with a new cdh mount point and paths (this creates a new share and gives cdh servers permission to mount it): https://github.com/pulibrary/princeton_ansible/blob/main/group_vars/nfsserver/staging.yml
- add an ansible step to configure the mount point on the clients, like this: https://github.com/pulibrary/princeton_ansible/blob/516c7ce12779d7d23d0dbb73f95b960d4d8d99a8/roles/libwww/tasks/main.yml#L105C1-L116C15
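For reference, a minimal sketch of what that client-side mount task could look like (server name, export path, mount point, and options here are all assumptions, not our actual values):

```yaml
# hypothetical sketch -- adjust src/path/opts to the real nfs share
- name: Ensure the cdh nfs share is mounted
  ansible.posix.mount:
    path: /mnt/nfs/cdh                           # client mount point (assumed)
    src: "nfs-server.example.edu:/exports/cdh"   # server export (assumed)
    fstype: nfs
    opts: rw,hard
    state: mounted
```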
@kayiwa will handle the nfs config and setup but will make sure it's documented in a way that both @kayiwa and @rlskoeser understand.
The old derrida site on the RC VM is still running on apache2, but we don't care about that. It looks like shxco is also still running on apache2 rather than nginx; maybe we should just migrate it to nginx so we can get rid of the lingering apache2 ansible stuff.
It looks like apache packages are still installed on some of the VMs where we're no longer using them (e.g. cdh-prosody1 and cdh-test-prosody1); no need to clean that up manually, since it will get wiped out when we move to jammy and get rid of the bionic VMs.
@kayiwa following up on the nfs mount work: I added a new `nfs` tag so we can easily run the new setup on existing applications, and I tried running it on prosody_qa. The `chown` on the app-specific subfolder fails, and my investigation suggests it may be related to NFS security protections and the `root_squash` option. Does this sound familiar from other applications that use NFS?
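For context: with `root_squash` (the NFS default), root on the client is mapped to an unprivileged user on the server, so a `sudo chown` against the mounted share fails even though the same command works locally. A hypothetical `/etc/exports` line showing where the option lives (the export path and hostname are placeholders, not our actual config):

```
/exports/cdh  cdh-test-prosody1.example.edu(rw,sync,root_squash)
```

Switching the export to `no_root_squash` would let the chown succeed, at the cost of trusting root on the clients; alternatively the server side can pre-create the app subfolder with the right ownership.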
Related: is there any way to guarantee that `conan` gets the same user and group id on both servers within a host group? Since we're setting them up from the same starting point and using the same playbook, can we trust that this will be the case?
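Nothing guarantees matching ids unless they're pinned explicitly; if we want to rely on them for NFS ownership, one option (a sketch with assumed values, not what the playbook currently does) is to fix the uid/gid in the user setup tasks:

```yaml
# hypothetical sketch -- the uid/gid values are assumptions;
# pick numbers reserved across the whole host group
- name: Create conan group with a fixed gid
  ansible.builtin.group:
    name: conan
    gid: 2000

- name: Create conan user with a fixed uid
  ansible.builtin.user:
    name: conan
    uid: 2000
    group: conan
```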
next steps for prosody qa, once we get the nfs permissions fixed:
- move `/srv/www/media/` to a new `media` folder under the prosody folder on the nfs share
- update the `media_root` config in ansible to reflect the new path
- move `/srv/www/data/` to a new `data` folder under the prosody folder on the nfs share
- update the `data_path` config in ansible to reflect the new path

@kayiwa trying to move this forward.
How long should it take to copy data to NFS? Is there a faster way to move it than using `cp`?
```
pulsys@cdh-test-prosody1:~$ du -sh /srv/www/data/*
4.1G    /srv/www/data/ht_text_pd
1.6G    /srv/www/data/marc
```
Documenting for myself: my usual rsync options (`-avz`) don't work for copying content into nfs because of the owner/permissions problem on the older boxes. I checked the man page and picked out the portions of `-a` (archive) that we actually want. Here's the command that worked: `sudo rsync -rlptDz --stats /srv/www/data /mnt/nfs/cdh/prosody/`
updated command syntax for rsync-ing media content into nfs, with an overall progress bar: `sudo rsync -rlptDz --stats --info=progress2 /var/www/media /mnt/nfs/cdh/cdhweb/`
all CDH VMs have now been upgraded to jammy 🎉
details from @kayiwa:
We'll want to test the upgrade in staging first. Francis says he'll do the dependencies work on our PRs. Once everything is working in staging we can schedule an upgrade for production.
projects needing upgrades:
subtasks related to this upgrade