Closed: markusobi0 closed this issue 11 months ago
The issue will be resolved in the next major version update, as v2 includes a lot of performance fixes and support for, among other things, embedded MariaDB. See our milestone: https://github.com/louislam/uptime-kuma/milestone/24
I am unsure why your instance is not vacuuming correctly. 7 days x 100 monitors should not lead to more than 1 GB of storage.
You could try running `VACUUM;` in the SQLite console; a sketch follows.
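A minimal sketch, assuming a Docker install with the data directory mounted on the host and a container named `uptime-kuma` (the container name and host path are assumptions; `kuma.db` is the default database file name). Stopping the container first avoids vacuuming a database that is being written to:

```sh
# Stop Uptime Kuma so nothing writes to the database during the VACUUM
docker stop uptime-kuma

# Rebuild the database file to reclaim free pages
# (adjust the path to wherever your data volume is mounted)
sqlite3 /path/to/uptime-kuma-data/kuma.db "VACUUM;"

docker start uptime-kuma
```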
> If you try to do a vacuum, the instance will crash without any error message. Maybe someone has an idea how to improve that.
Are there any healthchecks active? If there are, have you tried this operation with them disabled?
How can I turn off the healthchecks?
Yes, please refer to the documentation of your runtime for how to do this. For example, Docker has the `--no-healthcheck` flag; see the sketch below.
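A minimal sketch of recreating the container with healthchecks disabled. `--no-healthcheck` is a standard `docker run` flag that disables the image's built-in HEALTHCHECK; the container name, port mapping, and volume name are illustrative assumptions:

```sh
# Remove the old container and start a new one without the healthcheck
docker rm -f uptime-kuma
docker run -d --no-healthcheck \
  --name uptime-kuma \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  louislam/uptime-kuma:1
```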
This helped a lot. It's now down to 300MB and much faster. Thanks.
⚠️ Please verify that this bug has NOT been raised before.
🛡️ Security Policy
📝 Describe your problem
I'm monitoring 96 services with Uptime Kuma. After 1.5 years the database has grown to about 8 GB. As a result, the instance is extremely slow: the web panel takes around 5 minutes to load, if it loads at all. The monitoring history is currently set to 7 days. If I try to run a vacuum, the instance crashes without any error message. Maybe someone has an idea how to improve that. A rough way to check where the space goes is sketched below.
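A minimal sketch for gauging the database size and the number of stored heartbeats. The host path is an assumption, and `heartbeat` is assumed (from Uptime Kuma's schema) to be the table that dominates the file size:

```sh
# Total file size as SQLite sees it: page_count * page_size bytes
sqlite3 /path/to/uptime-kuma-data/kuma.db "PRAGMA page_count; PRAGMA page_size;"

# Row count of the heartbeat table, which usually holds the bulk of the data
sqlite3 /path/to/uptime-kuma-data/kuma.db "SELECT COUNT(*) FROM heartbeat;"
```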
📝 Error Message(s) or Log
```
[...] sql: 'UPDATE monitor SET active = 1 WHERE id = ?', bindings: [ '91' ] }
    at process.unexpectedErrorHandler (/app/server/server.js:1894:13)
    at process.emit (node:events:517:28)
    at emit (node:internal/process/promises:149:20)
    at processPromiseRejections (node:internal/process/promises:283:27)
    at processTicksAndRejections (node:internal/process/task_queues:96:32)
    at runNextTicks (node:internal/process/task_queues:64:3)
    at listOnTimeout (node:internal/timers:538:9)
    at process.processTimers (node:internal/timers:512:7)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:312:26)
    at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:287:28)
    at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:572:22)
    at async RedBeanNode.exec (/app/node_modules/redbean-node/dist/redbean-node.js:536:9)
    at async UptimeKumaServer.startMonitor (/app/server/uptime-kuma-server.js:412:9)
    at async UptimeKumaServer.restartMonitor (/app/server/uptime-kuma-server.js:434:16)
{ sql: 'UPDATE monitor SET active = 1 WHERE id = ?', bindings: [ '92' ] }
    at process.unexpectedErrorHandler (/app/server/server.js:1894:13)
    at process.emit (node:events:517:28)
    at emit (node:internal/process/promises:149:20)
    at processPromiseRejections (node:internal/process/promises:283:27)
    at process.processTicksAndRejections (node:internal/process/task_queues:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
```
🐻 Uptime-Kuma Version
1.23.4
💻 Operating System and Arch
Ubuntu 20.04 x64
🌐 Browser
Google Chrome 119.0.6045.124
🐋 Docker Version
Docker Standalone 23.0.1
🟩 NodeJS Version
No response