louislam / uptime-kuma

A fancy self-hosted monitoring tool
https://uptime.kuma.pet
MIT License
55.99k stars 5.04k forks

`SQLITE_FULL` while Attempting to Clear Monitor History. #4581

Closed Cmillets closed 6 months ago

Cmillets commented 6 months ago

⚠️ Please verify that this question has NOT been raised before.

🛡️ Security Policy

📝 Describe your problem

My server dedicated to Uptime Kuma is completely full. I tried to clear ALL statistics and got an error (disk full).

(screenshot)

Shrinking the database gives the same error.

How do I clear the monitor history from the server's CLI without losing my database setup?

Uptime Kuma is running in Docker.

📝 Error Message(s) or Log

(screenshot)

🐻 Uptime-Kuma Version

Version: 1.23.11

💻 Operating System and Arch

Linux 3.10.0-862.el7.x86_6

🌐 Browser

Google Chrome

🖥️ Deployment Environment

CommanderStorm commented 6 months ago

Interesting. According to https://sqlite-users.sqlite.narkive.com/fo3fnX5n/sqlite-delete-record-fail-when-disk-full:

While performing any change operation, SQLite will first back-up the original portions to a second rollback file, usually in multiples of 1K. This file disappears or resets when you commit the current transaction. If the disk is full, then this file creation fails, so no changes will work.

Do you have the option to temporarily add more storage?

If not, try pausing ALL monitors and then run this statement (via docker exec ...) with incrementally larger LIMIT values until clearing in the UI works.

DELETE FROM HEARTBEAT LIMIT 1 
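Spelled out, that invocation might look like the sketch below; the container name (`uptime-kuma`), the database path (`/app/data/kuma.db`), and the LIMIT value are assumptions that may differ in your deployment. Note that `DELETE ... LIMIT` only works if SQLite was compiled with `SQLITE_ENABLE_UPDATE_DELETE_LIMIT`, so the portable subquery form is used instead, and demonstrated against a throwaway database:

```shell
# Inside the container (assumed name and DB path -- check with `docker ps`):
# docker exec uptime-kuma sqlite3 /app/data/kuma.db \
#   "DELETE FROM heartbeat WHERE id IN (SELECT id FROM heartbeat ORDER BY time LIMIT 100);"

# The same deletion, demonstrated against a throwaway database:
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE heartbeat (id INTEGER PRIMARY KEY, time TEXT);"
sqlite3 "$db" "INSERT INTO heartbeat (time) VALUES ('2024-01-01'),('2024-01-02'),('2024-01-03');"
sqlite3 "$db" "DELETE FROM heartbeat WHERE id IN (SELECT id FROM heartbeat ORDER BY time LIMIT 1);"
remaining=$(sqlite3 "$db" "SELECT COUNT(*) FROM heartbeat;")
echo "$remaining"   # 2 rows left: the oldest heartbeat is gone
rm -f "$db"
```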

If you have a status page, deleting a custom icon might buy a little wiggle room, but more likely you have some log/cache files that are larger than that (you can list directory sizes via du -sh /*).

It might also be simpler to back up the whole data directory to a larger disk, reduce the data stored, and then copy it back (shut down Uptime Kuma before you copy ^^).
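That copy-out-and-back approach might look like the following; the container name and both paths are assumptions, so treat this as a sketch rather than tested commands:

```shell
# Sketch -- container name and paths are assumptions:
# docker stop uptime-kuma
# cp -a /var/lib/docker/volumes/uptime-kuma /mnt/bigdisk/kuma-backup
#   (trim the history / free up space while the data sits on the larger disk)
# cp -a /mnt/bigdisk/kuma-backup/. /var/lib/docker/volumes/uptime-kuma/
# docker start uptime-kuma
```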

Cmillets commented 6 months ago

@CommanderStorm Thank you for your reply!

What is the exact command you mentioned above? I can't get it to run...

I'm working with a very limited 10 GB. Once I clear up some room, I want to change the monitor history retention to 30 days. Right now my Docker storage looks like:

var/lib/docker - 3.2G
var/lib/docker/overlay2 - 1.4G
var/lib/docker/volumes/uptime-kuma - 1.8G

Do I need kuma.db-wal or kuma.db-shm?

I'm not very familiar with Linux, so your patience is much appreciated.

CommanderStorm commented 6 months ago

The WAL (write-ahead log) and SHM (shared-memory) files are created by SQLite.

see https://www.sqlite.org/tempfiles.html (not trying to be a smartass, I just think their docs are better at explaining this):

A write-ahead log or WAL file is used in place of a rollback journal when SQLite is operating in WAL mode. As with the rollback journal, the purpose of the WAL file is to implement atomic commit and rollback. The WAL file is always located in the same directory as the database file and has the same name as the database file except with the 4 characters "-wal" appended. The WAL file is created when the first connection to the database is opened and is normally removed when the last connection to the database closes. However, if the last connection does not shutdown cleanly, the WAL file will remain in the filesystem and will be automatically cleaned up the next time the database is opened.

The shared-memory file contains no persistent content. The only purpose of the shared-memory file is to provide a block of shared memory for use by multiple processes all accessing the same database in WAL mode. If the VFS is able to provide an alternative method for accessing shared memory, then that alternative method might be used rather than the shared-memory file. For example, if PRAGMA locking_mode is set to EXCLUSIVE (meaning that only one process is able to access the database file) then the shared memory will be allocated from heap rather than out of the shared-memory file, and the shared-memory file will never be created.

The shared-memory file has the same lifetime as its associated WAL file. The shared-memory file is created when the WAL file is created and is deleted when the WAL file is deleted. During WAL file recovery, the shared memory file is recreated from scratch based on the contents of the WAL file being recovered.

=> do you need the content that is stored in the WAL? What was happening when the disk went full?
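If the -wal file itself is what is eating the space, a checkpoint folds its content back into kuma.db and truncates it; no committed data is lost. A sketch, where the container name `uptime-kuma` and the path `/app/data/kuma.db` are assumptions:

```shell
# Checkpoint the WAL into the main database file and truncate it to zero bytes:
# docker exec uptime-kuma sqlite3 /app/data/kuma.db "PRAGMA wal_checkpoint(TRUNCATE);"
```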

CommanderStorm commented 6 months ago

Once I clear up some room

Given that docker is the largest space-hog on your system, try the following

docker system prune

or more extreme

docker system prune -a

Cmillets commented 6 months ago

@CommanderStorm Thanks for the link!

I figured out what's going on. I mistakenly installed Uptime Kuma under the root directory, which only has 10 GB partitioned. How would I go about moving all of Docker and Uptime Kuma over to another volume partition? Are there instructions available?

CommanderStorm commented 6 months ago

How would I go about moving all of docker and uptime kuma over to another volume partition

I don't know about docker. (please read their docs on how to configure docker ^^)

You can move volumes by creating a container, mounting the source and destination volumes, and then copying between the directories.
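That volume-to-volume copy can be sketched with a throwaway container; both volume names here are assumptions:

```shell
# Create the destination volume, then copy the old volume's contents across:
# docker volume create uptime-kuma-new
# docker run --rm -v uptime-kuma:/from -v uptime-kuma-new:/to alpine cp -a /from/. /to/
```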

Cmillets commented 6 months ago

@CommanderStorm Thanks for helping me troubleshoot!

I had to follow this guide in order to move Docker.

Then I followed the update guide.

Everything is now working as expected.

feritkirikci commented 1 month ago

I tried them all one by one. The droplet I got from Digitalocean was limited to 10gb. I increased it to 50gb but ubuntu did not automatically increase the size of the 1st disk /dev/vda1. I understood that it did not perform any operation because there was no space left on the disk. I redefined the disk size with fdisk, restarted docker and re-set the uptime control. There was no lag or slowdown. 621 sites were added and it worked without any problems. It became 1.8gb database. I fixed the operation history with 7 days. I said shrink database and it became 19mb. Thanks for your help.