louislam / uptime-kuma

A fancy self-hosted monitoring tool
https://uptime.kuma.pet
MIT License

uptime-kuma 100% CPU usage #4094

Open syamsullivan opened 7 months ago

syamsullivan commented 7 months ago

⚠️ Please verify that this bug has NOT been raised before.

🛡️ Security Policy

📝 Describe your problem

CONTAINER ID   NAME               CPU %     MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O      PIDS
70b8a69b1ae8   uptime-kuma-saas   103.76%   133.9MiB / 7.637GiB   1.71%   991kB / 8.08MB   41kB / 129MB   12

I have an issue with Uptime Kuma: it uses only a single core, which affects dashboard performance.

Any suggestions?

I'm using Docker version 24.0.5, build ced0996 on CentOS 7 with 8 cores and 8 GB of RAM.
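
For context, docker stats reports CPU % relative to a single core, so an 8-core host can show up to 800%; 103.76% therefore means the container is pinning roughly one full core, not the whole machine. A one-shot snapshot can be taken like this (assuming the container name from the output above):

# Print a single snapshot instead of a live-updating view:
docker stats --no-stream uptime-kuma-saas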

📝 Error Message(s) or Log

No response

🐻 Uptime-Kuma Version

1.22.1

💻 Operating System and Arch

CentOS Linux release 7.9.2009

🌐 Browser

Version 117.0.5938.88

🐋 Docker Version

Docker version 24.0.5

🟩 NodeJS Version

No response

chakflying commented 7 months ago

To help with troubleshooting, please post the container logs, the output of htop run inside the container, and the number and types of monitors you are running.
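
For example (a sketch, assuming the container name uptime-kuma-saas from the docker stats output above; htop is not necessarily part of the image, so you may need to install it or fall back to top):

# Tail the container logs:
docker logs --tail 200 uptime-kuma-saas

# Open a shell inside the container and inspect processes:
docker exec -it uptime-kuma-saas /bin/bash
htop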

CommanderStorm commented 7 months ago

Also include the retention time you have configured.

syamsullivan commented 7 months ago

Will retention cost CPU?

Also, I use Docker as the main platform to deploy Kuma, and it always sits at 100% CPU. Should I increase the CPU limit of the container?

CommanderStorm commented 7 months ago

@syamsullivan please give us the information we asked for. See https://github.com/louislam/uptime-kuma/wiki/Troubleshooting if you need help getting this information.

Will retention cost CPU?

Retention is not a likely culprit. Please report it anyway.

Should I increase the CPU limit of the container?

That depends on what you set your limits to. One CPU is the maximum Node should use, since the Node.js process is essentially single-threaded. Note that CPU limits were originally designed to curb power consumption in large datacenters; use this feature of your runtime with caution.
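
If you do decide to cap the container, here is a minimal sketch using Docker's --cpus flag (assuming the container name from above; adjust the value as needed):

# Cap an already-running container at one CPU:
docker update --cpus 1 uptime-kuma-saas

# Or set the limit when creating the container:
docker run -d --name uptime-kuma --cpus 1 -p 3001:3001 \
  -v uptime-kuma:/app/data louislam/uptime-kuma:1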

bmdbz commented 5 months ago

I have the same problem, and it is exacerbated when I log into the web UI.

Generally, right after I restart the Docker container, I can log in to the web UI and see the monitoring items normally. After a period of time (maybe 15 minutes or less), when I open the web UI again the interface no longer displays any monitoring items, even though the monitoring tasks are still running.

I have 500+ monitoring items. I mainly chose uptime-kuma because it is easier than tools like Zabbix, but the 100% CPU utilization keeps me from adopting it.

CommanderStorm commented 5 months ago

@bmdbz Could you report the values asked for above (the number and types of monitors, plus your configured retention)?

Note that the first beta of v2.0 is still a few weeks out, but that release will come with a lot of performance improvements. In v1, 500+ monitors (depending on what "+" means) is likely pushing it.
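
If it helps with gathering those values, here is a hedged way to count monitors directly, assuming the default v1 layout with the SQLite database at /app/data/kuma.db and a monitor table with a type column (copy the database out of the container if sqlite3 is not installed inside it):

# Copy the database to the host, then count monitors grouped by type:
docker cp uptime-kuma-saas:/app/data/kuma.db .
sqlite3 kuma.db "SELECT type, COUNT(*) FROM monitor GROUP BY type;"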

bmdbz commented 5 months ago

Thank you for your reply.

In v1, 500+ means more than 500.

bmdbz commented 5 months ago

[screenshot: htop output]

The above is a screenshot of the htop output, thank you!

CommanderStorm commented 4 months ago

Missed this response. The htop output you posted is sorted by memory; could you sort by CPU utilisation instead? In the screenshot, CPU usage is not 100%, but rather about 30%.
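
For reference, htop can be started pre-sorted by CPU instead of memory (interactively, pressing P or using the F6 menu does the same):

# Sort the process list by CPU usage:
htop --sort-key PERCENT_CPU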

cayenne17 commented 3 months ago

I just noticed the same problem. When I don't have the uptime-kuma web interface open, I'm in the ~5% CPU range: [screenshot]

When I have a tab open in the background with no actions on it, CPU varies between 30% and 70%: [screenshot]

Uptime Kuma is installed with Docker version 25.0.4, build 1a576c5 on a Debian 12.5 VM.

root@UptimeKuma:~# docker -v
Docker version 25.0.4, build 1a576c5

root@UptimeKuma:~# cat /etc/debian_version 
12.5

Uptime Kuma version: 1.23.11 (frontend version: 1.23.11)

Average VM CPU graph from Proxmox VE: [screenshot]

sunlewuyou commented 2 months ago

I am on a non-Docker installation.

github-actions[bot] commented 6 days ago

We are clearing up our old help-issues and your issue has been open for 60 days with no activity. If no comment is made and the stale label is not removed, this issue will be closed in 7 days.

cayenne17 commented 4 days ago

The problem still exists.

CommanderStorm commented 4 days ago

This is likely resolved by the performance improvements in #4500, more specifically https://github.com/louislam/uptime-kuma/pull/3515.

Testing PRs can be done via https://github.com/louislam/uptime-kuma/wiki/Test-Pull-Requests, but I don't expect that here, since it would require recreating 500 monitors without good import/export functionality.

I have changed this to a FR (feature request) to keep the stale bot from closing it.

What I need from the others in this issue (@sunlewuyou @cayenne17) is the metadata about:

  • how many monitors do you have configured
  • what is their type
  • what is your retention

cayenne17 commented 4 days ago

What I need from the others in this issue (@sunlewuyou @cayenne17) is the metadata about

  • how many monitors do you have configured
  • what is their type
  • what is your retention

@CommanderStorm

How many monitors do you have configured? 74 online, 2 offline, and 5 paused.

What is their type? Mostly ICMP probes and a few HTTPS probes.

What is your retention? 30 days.
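
As a side note for anyone else reporting their numbers: the retention value can also be read straight from the database. A sketch, assuming the v1 setting table and its keepDataPeriodDays key (verify against your own install):

# Read the configured history retention in days:
sqlite3 kuma.db "SELECT value FROM setting WHERE key = 'keepDataPeriodDays';"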