ronivay / XenOrchestraInstallerUpdater

Xen Orchestra install/update script
GNU General Public License v3.0

VM Disk Full with syslog and daemon.log #157

Closed: marleyjaffe closed this issue 1 year ago

marleyjaffe commented 1 year ago

OS Version: Debian, kernel 5.10.0-19 (5.10.149-2)
Node.js version (node -v): 16.18.1
Yarn version (yarn -v): 1.22.19

Server specs (vCPUs and RAM on the machine where you attempt to install): 2 vCPUs, 4 GB RAM, 10 GB disk

Issue

After a hardware reboot I am unable to access the web UI: I receive a Cannot GET / error in Firefox and Safari, and a certificate error in Chrome. After attempting an update via the script I get Write error: No space left on device.

After some investigation I found that /var/log was taking up 7.1GB! The majority of this data was in syslog and daemon.log. Online research seems to indicate that I can delete these files without issue, but how do we prevent them from getting this big to begin with? I'm using XOA in the meantime.

I see this line at the top of both logs: Dec 4 00:00:28 xo-ce systemd[1]: Finished Rotate log files.

393383 -rw-r-----   1 root  adm             3.4G Dec 25 01:51 daemon.log
393379 -rw-r-----   1 root  adm             3.4G Dec 25 01:51 syslog
xo@xo-ce:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.9G     0  1.9G   0% /dev
tmpfs           391M   40M  352M  11% /run
/dev/xvda1      9.8G  9.8G     0 100% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           391M     0  391M   0% /run/user/1000
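
For anyone in the same spot, a couple of generic commands along these lines can show what is eating the space and which messages are repeating. This is only a rough sketch using the standard Debian log paths, not something from the install script:

# Largest items under /var/log
du -xh /var/log | sort -h | tail -n 10

# Most frequent lines in daemon.log (blanks the timestamp and hostname
# fields so similar messages group together)
awk '{ $1=$2=$3=$4=""; print }' /var/log/daemon.log | sort | uniq -c | sort -rn | head -n 20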
ronivay commented 1 year ago

Hi,

I assume this is the XO VM image you're using, right? It seems that logrotate hasn't rotated the logs since Dec 4th, if that's what you see at the top of both logfiles. Why that is happening is harder to guess; look through the logs to see if you can spot a reason for it. You should also check the logfile contents to identify what kind of messages are filling them: is something flooding them constantly? Several gigabytes in a few weeks sounds a bit unusual.

Have you made any changes to the VM configuration after deploying it? I tested with the latest image and can't see anything wrong with logrotate, which keeps these logfile sizes manageable by eventually compressing and deleting old content.
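
For reference, one way to cap growth between scheduled logrotate runs is a size trigger in the rsyslog stanza. The snippet below is only a sketch of what an edited /etc/logrotate.d/rsyslog entry could look like; the exact defaults shipped in the image may differ, the 500M value is an arbitrary example, and the postrotate path is the one used by Debian's rsyslog package:

# /etc/logrotate.d/rsyslog (sketch): maxsize forces a rotation once the
# file exceeds the limit, even if the daily schedule hasn't come around yet.
/var/log/syslog
/var/log/daemon.log
{
        rotate 7
        daily
        maxsize 500M
        missingok
        notifempty
        compress
        delaycompress
        sharedscripts
        postrotate
                # Script shipped by Debian's rsyslog package; adjust if different on your system.
                /usr/lib/rsyslog/rsyslog-rotate
        endscript
}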

To free the disk space once you've used the files to investigate the situation further, you can truncate those logfiles with truncate -s0 /var/log/syslog && truncate -s0 /var/log/daemon.log to empty them. It's probably best to restart rsyslogd after that with systemctl restart rsyslogd.
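
Put together, the cleanup might look like the following sketch. On current Debian releases the systemd unit is typically named rsyslog, so verify the service name on your own system before running the restart line:

sudo truncate -s0 /var/log/syslog       # empty the files in place, keeping them open for rsyslog
sudo truncate -s0 /var/log/daemon.log
sudo systemctl restart rsyslog          # Debian's usual unit name; adjust if yours differs
df -h /                                 # confirm the space was freed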

github-actions[bot] commented 1 year ago

This issue has been open for 14 days without activity. It will be closed in 5 days if not updated.