jsclayton / prometheus-plex-exporter

Export metrics from your Plex Media Server

Slow Prometheus scraping #7

Closed ammmze closed 1 year ago

ammmze commented 1 year ago

I'm not sure what changed recently, but I keep getting alerts that the Plex exporter is down. To debug with as few variables as possible, I jumped into the pod and used curl to fetch localhost:9000/metrics. In most cases I got a response only after about a minute. Any idea what might be going on?

/ # curl -so /dev/null -w "\
>    namelookup:  %{time_namelookup}s\n\
>       connect:  %{time_connect}s\n\
>    appconnect:  %{time_appconnect}s\n\
>   pretransfer:  %{time_pretransfer}s\n\
>      redirect:  %{time_redirect}s\n\
> starttransfer:  %{time_starttransfer}s\n\
> -------------------------\n\
>         total:  %{time_total}s\n" localhost:9000/metrics
   namelookup:  0.000053s
      connect:  0.000165s
   appconnect:  0.000000s
  pretransfer:  0.000239s
     redirect:  0.000000s
starttransfer:  59.804658s
-------------------------
        total:  59.804780s
/ # curl -so /dev/null -w "\
>    namelookup:  %{time_namelookup}s\n\
>       connect:  %{time_connect}s\n\
>    appconnect:  %{time_appconnect}s\n\
>   pretransfer:  %{time_pretransfer}s\n\
>      redirect:  %{time_redirect}s\n\
> starttransfer:  %{time_starttransfer}s\n\
> -------------------------\n\
>         total:  %{time_total}s\n" localhost:9000/metrics
   namelookup:  0.000063s
      connect:  0.000156s
   appconnect:  0.000000s
  pretransfer:  0.000215s
     redirect:  0.000000s
starttransfer:  58.001007s
-------------------------
        total:  58.001083s
/ # curl -so /dev/null -w "\
>    namelookup:  %{time_namelookup}s\n\
>       connect:  %{time_connect}s\n\
>    appconnect:  %{time_appconnect}s\n\
>   pretransfer:  %{time_pretransfer}s\n\
>      redirect:  %{time_redirect}s\n\
> starttransfer:  %{time_starttransfer}s\n\
> -------------------------\n\
>         total:  %{time_total}s\n" localhost:9000/metrics
   namelookup:  0.000114s
      connect:  0.096899s
   appconnect:  0.000000s
  pretransfer:  0.097834s
     redirect:  0.000000s
starttransfer:  33.099731s
-------------------------
        total:  33.099826s
/ # curl -so /dev/null -w "\
>    namelookup:  %{time_namelookup}s\n\
>       connect:  %{time_connect}s\n\
>    appconnect:  %{time_appconnect}s\n\
>   pretransfer:  %{time_pretransfer}s\n\
>      redirect:  %{time_redirect}s\n\
> starttransfer:  %{time_starttransfer}s\n\
> -------------------------\n\
>         total:  %{time_total}s\n" localhost:9000/metrics
   namelookup:  0.000056s
      connect:  0.000179s
   appconnect:  0.000000s
  pretransfer:  0.000253s
     redirect:  0.000000s
starttransfer:  58.697103s
-------------------------
        total:  58.697178s
/ # curl -so /dev/null -w "\
>    namelookup:  %{time_namelookup}s\n\
>       connect:  %{time_connect}s\n\
>    appconnect:  %{time_appconnect}s\n\
>   pretransfer:  %{time_pretransfer}s\n\
>      redirect:  %{time_redirect}s\n\
> starttransfer:  %{time_starttransfer}s\n\
> -------------------------\n\
>         total:  %{time_total}s\n" localhost:9000/metrics
   namelookup:  0.000049s
      connect:  0.000179s
   appconnect:  0.000000s
  pretransfer:  0.000243s
     redirect:  0.000000s
starttransfer:  60.750820s
-------------------------
        total:  60.750975s
/ #
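
For context, a response that takes ~60s will exceed Prometheus's default scrape_timeout of 10s, which is enough to mark the target down and fire the alert. A minimal scrape-config sketch that raises the timeout while the root cause is investigated (the job name and target address are illustrative) might look like:

scrape_configs:
  - job_name: plex-exporter          # illustrative job name
    scrape_interval: 2m              # must be >= scrape_timeout
    scrape_timeout: 90s              # default is 10s; the exporter was taking ~60s to respond
    static_configs:
      - targets: ["plex-exporter:9000"]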
ammmze commented 1 year ago

Oh, I just realized it was using a lot of CPU. I think it's related to the amount of CPU I gave the pod, and it was being throttled. I had a limit of 100m on it. After bumping it to 500m I'm getting a response in about 7-8s. Looks like it's using about 280m of CPU now.
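
A minimal sketch of the corresponding resource bump on the exporter container, assuming it runs as a standard Kubernetes Deployment (the request value is illustrative):

# container spec excerpt from the exporter Deployment
resources:
  requests:
    cpu: 250m          # illustrative request
  limits:
    cpu: 500m          # raised from 100m; at 100m the pod was CPU-throttled and scrapes took ~60s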