linuxserver / docker-radarr


[BUG] Ver. 5.11.0.9244-ls241 spams logs and sends CPU to 100% #237

Open NightHawkATL opened 1 month ago

NightHawkATL commented 1 month ago

Is there an existing issue for this?

Current Behavior

Radarr will randomly start using 100% CPU on the VM that hosts all of my Arr containers and cause issues; I have to go into Portainer and restart the container to get it to stop. At first it would fill up the swap, so I disabled swap two days ago, and then it started hitting 100% CPU yesterday and today. The VM has 4 vCPUs and 16 GB RAM. I am also running a separate 4K Radarr instance, installed as part of the same stack, and it is not exhibiting the same behavior as this one. (Screenshots of the CPU/memory graphs were attached.)
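For reference, a minimal sketch of how the same check and restart can be done from the host shell instead of Portainer (assumes the container name `radarr` from the compose snippet further down):

```bash
# One-shot snapshot of per-container CPU/memory usage to confirm radarr is the culprit
docker stats --no-stream

# Restart only the misbehaving container (same effect as the restart done in Portainer)
docker restart radarr
```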

Expected Behavior

That it won't use 100% CPU for one container. I don't know what else to put here.

Steps To Reproduce

It is random, so I would say: set up a VM with Ubuntu 22.04, install Docker, Compose and Portainer, and disable the swap space on the VM. Then let Radarr run for a day or two and it should peg the CPU at 100%. A rough command sketch follows.
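A hedged sketch of those steps on the host, assuming Docker Compose v2 and a stack file containing the `radarr` service shown below:

```bash
# Disable swap for the current boot (comment out the swap entry in /etc/fstab to make it permanent)
sudo swapoff -a
free -h            # confirm swap now shows 0

# Bring the radarr service up from the stack and leave it running for a day or two
docker compose up -d radarr

# Watch container CPU; the reported behavior is that radarr eventually pegs the CPU at 100%
docker stats radarr
```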

Environment

- OS: Ubuntu 22.04
- How docker service was installed: As part of a single stack with all of the Arrs, on the same Docker network.

CPU architecture

x86-64

Docker creation

```yaml
# movie management
  radarr:
    image: ghcr.io/linuxserver/radarr:latest
    container_name: radarr
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
    volumes:
      - /portainer/Files/AppData/Config/radarr:/config
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone
      - /home/{redacted-user}/stuff:/stuff
      - /home/{redacted-user}/backup:/backup
    ports:
      - 7878:7878
```

Container logs

```
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:24 +00:00", the heartbeat has been running for "00:00:01.3019212" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:27 +00:00", the heartbeat has been running for "00:00:01.6742217" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:33 +00:00", the heartbeat has been running for "00:00:01.6027067" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:38 +00:00", the heartbeat has been running for "00:00:01.6800331" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:43 +00:00", the heartbeat has been running for "00:00:01.7568994" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:47 +00:00", the heartbeat has been running for "00:00:01.7845453" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:53 +00:00", the heartbeat has been running for "00:00:01.2124929" which is longer than "00:00:01". This could be caused by thread pool starvation.
```
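When the CPU pegs again, a hedged sketch of extra diagnostics that could be captured alongside these warnings (assumes the container name `radarr`; whether `top` is available inside the image is an assumption):

```bash
# Save the last 30 minutes of container logs around the spike
docker logs --since 30m radarr > radarr-cpu-spike.log 2>&1

# Show which processes inside the container the host sees consuming CPU
docker top radarr

# If top exists in the image, take a one-shot process snapshot from inside the container
docker exec radarr top -bn1 | head -n 20
```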
github-actions[bot] commented 1 month ago

Thanks for opening your first issue here! Be sure to follow the relevant issue templates, or risk having this issue marked as invalid.

NightHawkATL commented 4 weeks ago

?