Let's work on this.
I ssh into the server and run
df -h
to work out how bad the space problem is. I get this:
Yep - that looks like there really is just too much on the drive.
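For reference, a quick way to see where that space has gone is something like the following (a sketch, not what I actually ran here):

```
# Show the biggest directories, largest first, staying on the root filesystem.
sudo du -xh / --max-depth=2 2>/dev/null | sort -rh | head -n 20
```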
Two possible solutions:
Okay, I'm going to end up doing the first one sooner or later, but I don't want to risk data loss, so let's investigate the other one first...
The command sudo find / -type f -size +100M
finds me this:
So that's a tiny bit of breathing space.
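For future reference, a variant that also shows sizes and sorts them looks something like this (again, a sketch rather than the exact command I ran):

```
# List files over 100M with their sizes, biggest first, staying on one filesystem.
sudo find / -xdev -type f -size +100M -exec du -h {} + | sort -rh | head -n 20
```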
Before I expand the drive I want to do some good backups. I do the AWS image first.
(I do this periodically anyway)
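For the record, the same image can be taken from the CLI with something like this (the instance ID here is a placeholder, not the real one):

```
# Create an AMI of the running instance without rebooting it.
aws ec2 create-image --instance-id i-0123456789abcdef0 \
    --name "ovf-pre-resize-$(date +%F)" --no-reboot
```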
Because recovering from one of those snapshots might involve some tricky admin, I also periodically take a more old-school backup of the data by creating an encrypted backup to download locally with `sudo zip -e backup.zip html/designs/*.obz -t 2023-02-01`. However, I don't think I have space for that, so I'm going to try and run it from my local machine:
ssh XXX@theopenvoicefactory.org 'find /usr/share/nginx/html/designs -name "*.obz" -newermt 2023-03-01 -print0 | tar -czf - --null -T -' >backup.tar.gz
(I've taken out some security stuff.)
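To check what actually landed in the archive, something like this does the job:

```
# List the archive contents and count the entries.
tar -tzf backup.tar.gz | head -n 20
tar -tzf backup.tar.gz | wc -l
```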
Okay, I've checked that the local backup has the files I expect in it, and the AWS image is:
So now I have to expand the drive. I've done this before for https://whitewaterwriters.com/ so it should be easy enough.
In fact, I have the following entry in my logs:
06/01/22 18:59 to 19:29, Working on https://github.com/eQualityTime/Public/issues/139#issuecomment-1008743681 +EQT
I extended the size of the partition with these commands:
1025 du -h
1026 df -h
1027 df -hT
1028 lsblk
1029 sudo growpart /devxvda 1
1030 sudo growpart /dev/xvda 1
1031 lsblk
1032 pwd
1033 df -h
1034 lsblk
1035 df -hT
1036 sudo xfs_growfs -d /
1037 df -hT
Which should help.
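In other words, assuming the same layout as last time (an XFS root filesystem on /dev/xvda1), the general shape is:

```
lsblk                       # confirm the block device now shows the larger volume size
sudo growpart /dev/xvda 1   # grow partition 1 to fill the resized volume
sudo xfs_growfs -d /        # grow the XFS filesystem mounted at / to fill the partition
df -hT                      # confirm the extra space is visible
```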
I've used the AWS website to do the increase:
...and they direct you to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html so that you can run the console commands that actually expand it.
I used the following commands:
2354 10/05/23 11:07:25 sudo lsblk
2355 10/05/23 11:10:14 df -hT
2356 10/05/23 11:12:42 sudo growpart /dev/xvda 1
2357 10/05/23 11:12:56 sudo lsblk
2358 10/05/23 11:13:07 df -hT
2359 10/05/23 11:13:19 sudo xfs_growfs -d /
2360 10/05/23 11:13:23 df -hT
and that seemed to work. Let's do a user-check.
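A quick command-line version of that check is something like:

```
# Expect an HTTP 200 back from nginx.
curl -I https://theopenvoicefactory.org/
```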
That's not good. The message didn't go away for a few minutes, so I `sudo systemctl restart nginx`
and everything comes back. I create a test aid and everything is lovely.
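(If the restart hadn't brought it back, the next things to reach for would have been something like:)

```
sudo nginx -t                   # check the configuration still parses
sudo systemctl status nginx     # see whether the service is actually running
sudo journalctl -u nginx -n 50  # read the most recent nginx service log lines
```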
An issue came in by email:
It's been independently tested and I get this:
It should be a simple fix, right?