Closed by akston 2 years ago.
Have you seen the /api/stats endpoint?
I have, and while it's certainly useful, it's not in the Prometheus / OpenMetrics format that's so prevalent in the industry. If you're running Frigate in a Docker or Kubernetes environment such as I am, it seems like a natural progression.
I realize that I'm a sample size of one, so I'm okay with tabling this if there's no other interest.
It would be easy to add, I'm just not familiar with the standard format. Perhaps you can suggest what the format should be based on the info currently available in the stats endpoint.
Great idea! I'm no expert in Prometheus, but I once wrote a working mining exporter for Unraid, so maybe this helps.
Your API stats could look something like this:
# HELP frigate_uptime_total Uptime
# TYPE frigate_uptime_total counter
frigate_uptime_total 981849
# HELP frigate_detection_fps Detection FPS
# TYPE frigate_detection_fps gauge
frigate_detection_fps 5.0
# HELP frigate_detector_start detection_start
# TYPE frigate_detector_start gauge
frigate_detector_start{detector="coral",count="1",type="coral"} 0.0
# HELP frigate_detector_inference_speed Inference speed
# TYPE frigate_detector_inference_speed gauge
frigate_detector_inference_speed{detector="coral",count="1",type="coral"} 8.32
# HELP frigate_detector_pid Detector PID
# TYPE frigate_detector_pid gauge
frigate_detector_pid{detector="coral",count="1",type="coral"} 242
# HELP frigate_camera_fps Camera FPS
# TYPE frigate_camera_fps gauge
frigate_camera_fps{name="frontcam",pid="250",type="fps"} 10.0
frigate_camera_fps{name="frontcam",pid="250",type="detection_fps"} 0.0
frigate_camera_fps{name="frontcam",pid="250",type="process_fps"} 0.0
frigate_camera_fps{name="frontcam",pid="250",type="skipped_fps"} 0.0
# HELP frigate_storage Storage usage
# TYPE frigate_storage gauge
frigate_storage{mount_type="tmpfs",drive="/dev/shm",type="free"} 1072.7
frigate_storage{mount_type="tmpfs",drive="/dev/shm",type="total"} 1073.7
frigate_storage{mount_type="tmpfs",drive="/dev/shm",type="used"} 1.0
This is written completely dry and untested; see https://prometheus.io/docs/concepts/metric_types/ for the metric types.
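If it helps, a standalone exporter along these lines could poll /api/stats and republish a few of the values. Again just a rough, untested sketch: the URL, port, metric names and the assumed shape of the stats JSON are purely illustrative.

# Rough, untested sketch of a standalone exporter that polls Frigate's
# /api/stats and republishes a few values as Prometheus gauges.
import time

import requests
from prometheus_client import Gauge, start_http_server

FRIGATE_STATS_URL = "http://frigate:5000/api/stats"  # adjust to your setup

detection_fps = Gauge("frigate_detection_fps", "Overall detection FPS")
camera_fps = Gauge("frigate_camera_fps", "Per-camera FPS", ["name", "type"])

def update_metrics():
    stats = requests.get(FRIGATE_STATS_URL, timeout=5).json()
    detection_fps.set(stats.get("detection_fps", 0))
    for name, cam in stats.items():
        # assumed: per-camera entries are dicts carrying camera_fps, detection_fps, ...
        if isinstance(cam, dict) and "camera_fps" in cam:
            camera_fps.labels(name=name, type="fps").set(cam["camera_fps"])
            camera_fps.labels(name=name, type="detection_fps").set(cam["detection_fps"])

if __name__ == "__main__":
    start_http_server(9101)  # Prometheus scrapes this port at /metrics
    while True:
        update_metrics()
        time.sleep(15)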
I've been meaning to circle back around to this, but it looks like @corgan2222 beat me to it. Thanks!
There's a workaround if you use Home Assistant: its history backend can write to InfluxDB, and you can visualize that in Grafana.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
With the recent beta there's a nice stats page, it would be cool if those were available via a metrics endpoint for Prometheus to scrape.
@NickM-27 what do you think?
I have no knowledge of Prometheus. If someone wants to write a PR for this, then based on the above I believe we'd be happy to accept it.
I have made a simple Docker container that exports the Frigate /api/stats endpoint as Prometheus metrics. See here:
@bairhys Great work! Any chance you could PR this here and make it part of Frigate?
I agree here... @NickM-27 / @blakeblackshear, if this were ported into Frigate, where in the code repo would you like to see it? Part of stats? A new file? I could take a stab at this this weekend, but it would be great to know where this fits in the code base.
From what I can gather it's just an HTTP endpoint, right? If so, I would just add one alongside /stats in http.py. Just avoid running anything or doing any work in that call so it doesn't use extra resources.
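For illustration, an untested sketch of what that could look like. The blueprint and the stats helper here are placeholders, not Frigate's actual code.

# Untested sketch of a /metrics route next to the existing /stats route in
# http.py. The blueprint "bp" and get_latest_stats() are assumptions about
# Frigate's internals; adjust to whatever /stats already uses.
from flask import Blueprint, Response
from prometheus_client import (
    CONTENT_TYPE_LATEST,
    CollectorRegistry,
    Gauge,
    generate_latest,
)

bp = Blueprint("frigate", __name__)  # stands in for Frigate's existing blueprint

def get_latest_stats():
    # placeholder: in Frigate this would be however /stats obtains its dict
    return {"detection_fps": 5.0}

@bp.route("/metrics")
def metrics():
    stats = get_latest_stats()
    registry = CollectorRegistry()
    Gauge(
        "frigate_detection_fps", "Overall detection FPS", registry=registry
    ).set(stats.get("detection_fps", 0))
    return Response(generate_latest(registry), mimetype=CONTENT_TYPE_LATEST)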
@blakeblackshear @NickM-27 I got something working. However, I need some guidance around a goofy Python issue.
I added prometheus-client to the wheels requirements file and rebuilt the container... all is well.
However, when I attempt to use prometheus_client I get a strange conflicting HTTP lib error.
In the Python interpreter I typed

from prometheus_client import start_http_server, Summary

which outputs:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/dist-packages/prometheus_client/__init__.py", line 3, in <module>
from . import (
File "/usr/local/lib/python3.9/dist-packages/prometheus_client/exposition.py", line 4, in <module>
from http.server import BaseHTTPRequestHandler
File "/workspace/frigate/frigate/http.py", line 21, in <module>
from flask import (
File "/usr/local/lib/python3.9/dist-packages/flask/__init__.py", line 4, in <module>
from . import json as json
File "/usr/local/lib/python3.9/dist-packages/flask/json/__init__.py", line 8, in <module>
from ..globals import current_app
File "/usr/local/lib/python3.9/dist-packages/flask/globals.py", line 4, in <module>
from werkzeug.local import LocalProxy
File "/usr/local/lib/python3.9/dist-packages/werkzeug/__init__.py", line 1, in <module>
from .serving import run_simple as run_simple
File "/usr/local/lib/python3.9/dist-packages/werkzeug/serving.py", line 24, in <module>
from http.server import BaseHTTPRequestHandler
ModuleNotFoundError: No module named 'http.server'; 'http' is not a package
Now when I rename Frigate's http.py file to something like httpz.py, everything works.
It seems like some sort of namespace collision. Do you have any ideas how to fix it? Python is not my primary language. I know you can use import ... as aliases to avoid some of these collisions, but this seems to happen deeper in the import chain somewhere?
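For reference, a quick check like this (minimal sketch) shows which file the name http would be imported from:

# Shows which file the name "http" resolves to, without importing it.
# If this prints a path inside the Frigate source tree instead of the
# standard library, the local http.py is shadowing the stdlib http package.
import importlib.util
import sys

print(importlib.util.find_spec("http").origin)
print(sys.path[0])  # the first sys.path entry is usually where the shadow lives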
Is it required for another http server to be running for that? Could the existing server not be used?
As far as the actual issue, does it work if you use an import alias?
Still learning Prometheus myself too, I'm just following the docs: https://github.com/prometheus/client_python
I do see this now in the docs:
"To add Prometheus exposition to an existing HTTP server, see the MetricsHandler class which provides a BaseHTTPRequestHandler. It also serves as a simple example of how to write a custom endpoint."
Describe what you are trying to accomplish and why in non-technical terms
I'd like to be able to monitor the health and performance of Frigate using modern and well-supported industry standards.

Describe the solution you'd like
Prometheus instrumentation throughout the codebase to aggregate useful metrics, exposed at an appropriate endpoint (i.e. /metrics) for Prometheus to scrape periodically.

Describe alternatives you've considered
The alternative is a lack of observability with regard to the internal performance and health of Frigate.

Additional context
Python Prometheus client