Princeton-CDH / cdh-ansible

CDH Ansible playbook repository
Apache License 2.0

work with PUL to upgrade VMs from Bionic to Jammy Jellyfish #138

Closed rlskoeser closed 8 months ago

rlskoeser commented 1 year ago

details from @kayiwa:

Ubuntu bionic VMs go out of support in June. The process has been: set up a new operating system with Jammy Jellyfish, try to deploy, see what breaks, and fix with a PR that accommodates the existence of both Jammy and Bionic.

We'll want to test the upgrade in staging first. Francis says he'll do the dependency work on our PRs. Once everything is working in staging, we can schedule an upgrade for production.

projects needing upgrades:

subtasks related to this upgrade

rlskoeser commented 1 year ago

general process:

for production, once we know it runs in staging on jammy:

note: there will be some minimal downtime for the database upgrade (since we're upgrading that as well)

this will be a little different for the library apps, since they use capistrano for the app deploy where we use ansible

notes

rlskoeser commented 1 year ago

need to handle shared files between instances

need to update the nfs server config with a new cdh mount point and paths (creates a new share and gives cdh servers permission to mount); https://github.com/pulibrary/princeton_ansible/blob/main/group_vars/nfsserver/staging.yml
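For reference, a rough sketch of the kind of export entry involved; the variable names, paths, and network range here are hypothetical, since the real structure lives in the princeton_ansible group_vars file linked above:

```yaml
# Hypothetical sketch of an NFS share definition; the actual variable
# names and values are in princeton_ansible group_vars/nfsserver/staging.yml.
nfs_exports:
  - path: /mnt/nfsshare/cdh          # directory exported by the NFS server
    clients: "172.20.0.0/16"         # hosts allowed to mount the share
    options: "rw,sync,root_squash"   # root on clients is mapped to nobody
```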

Then we need to add an ansible step to configure the mount point, like this: https://github.com/pulibrary/princeton_ansible/blob/516c7ce12779d7d23d0dbb73f95b960d4d8d99a8/roles/libwww/tasks/main.yml#L105C1-L116C15
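Roughly, the step would look something like the sketch below, modeled on the libwww task linked above; the server name and paths are placeholders for the real CDH values:

```yaml
# Sketch of the client-side mount step; server name and paths are placeholders.
- name: Mount the CDH NFS share
  ansible.posix.mount:
    src: "nfs-server.example.edu:/mnt/nfsshare/cdh"  # placeholder export
    path: /mnt/nfs/cdh                               # client mount point
    fstype: nfs
    opts: rw,hard
    state: mounted  # mount now and persist the entry in /etc/fstab
```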

@kayiwa will handle the nfs config and setup but will make sure it's documented in a way that both @kayiwa and @rlskoeser understand.

rlskoeser commented 1 year ago

The old derrida site on the RC VM is still running on apache2, but we don't care about that. It looks like shxco is still running on apache2 rather than nginx; maybe we should just migrate it to nginx so we can get rid of the lingering apache2 ansible stuff.

It looks like apache packages are still installed on some of the VMs where we're no longer using them (e.g. cdh-prosody1 and cdh-test-prosody1); no need to clean that up manually, since it will all get wiped out when we move to jammy and retire the bionic VMs.

rlskoeser commented 10 months ago

@kayiwa following up on the nfs mount work: I added a new nfs tag so we can easily run the new setup on existing applications, and I tried running it on prosody_qa. The chown on the app-specific subfolder fails, and my investigation suggests it may be related to NFS security protections and the root_squash option. Does this sound familiar from other applications using NFS?
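For context on the root_squash hypothesis: with root_squash (the default for NFS exports), root on a client is remapped to an unprivileged user on the server, so a sudo chown against the mounted share gets permission denied. An illustrative export line (paths and network made up):

```
# /etc/exports (illustrative)
# root_squash maps client root to nobody, so chown as root fails on the
# mounted share; no_root_squash would allow it but weakens security.
# Alternatives: chown on the NFS server itself, or make uids/gids match
# across servers so no chown is needed.
/mnt/nfsshare/cdh  172.20.0.0/16(rw,sync,root_squash)
```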

Related: is there any way to guarantee that conan gets the same user and group id on both servers within a host group? Since we're setting them up from the same starting point and using the same playbook, can we trust that this will be the case?
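One way to take the guesswork out of it (a sketch, not necessarily how our roles create the user today; the id values are hypothetical) is to pin the uid and gid explicitly, so every host in the group gets identical ids:

```yaml
# Sketch: pin uid/gid explicitly so ids match across hosts (values hypothetical)
- name: Create conan group with a fixed gid
  ansible.builtin.group:
    name: conan
    gid: 1050

- name: Create conan user with a fixed uid
  ansible.builtin.user:
    name: conan
    uid: 1050
    group: conan
```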

rlskoeser commented 10 months ago

next steps for prosody qa, once we get the nfs permissions fixed:

rlskoeser commented 9 months ago

@kayiwa trying to move this forward.

How long should it take to copy the data to NFS? Is there a faster way to move it than cp?

pulsys@cdh-test-prosody1:~$ du -sh /srv/www/data/*
4.1G    /srv/www/data/ht_text_pd
1.6G    /srv/www/data/marc

rlskoeser commented 9 months ago

Documenting for myself: my usual rsync options (-avz) don't work for copying content into nfs because of the owner/permissions problem on the older boxes. I checked the man page and found the portions of -a (archive) that we actually want. Here's the command that worked: sudo rsync -rlptDz --stats /srv/www/data /mnt/nfs/cdh/prosody/
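Breaking that down (from the rsync man page): -a is shorthand for -rlptgoD, and it's the -o (owner) and -g (group) pieces that fail under root_squash, since setting ownership requires root privileges on the NFS server.

```
# -a expands to -rlptgoD; dropping -o (owner) and -g (group) skips the
# chown that root_squash denies, while keeping
#   -r recursive, -l symlinks, -p permissions, -t times, -D devices/specials
sudo rsync -rlptDz --stats /srv/www/data /mnt/nfs/cdh/prosody/
```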

rlskoeser commented 9 months ago

updated command syntax for rsyncing media content into nfs, with an overall progress bar:

sudo rsync -rlptDz --stats --info=progress2 /var/www/media /mnt/nfs/cdh/cdhweb/

rlskoeser commented 8 months ago

all CDH VMs have now been upgraded to jammy 🎉